/* Integrated Register Allocator (IRA) entry point.
   Copyright (C) 2006, 2007, 2008, 2009, 2010
   Free Software Foundation, Inc.
   Contributed by Vladimir Makarov <vmakarov@redhat.com>.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

/* The integrated register allocator (IRA) is a
   regional register allocator performing graph coloring on a top-down
   traversal of nested regions.  Graph coloring in a region is based
   on the Chaitin-Briggs algorithm.  It is called integrated because
   register coalescing, register live range splitting, and choosing a
   better hard register are done on-the-fly during coloring.  Register
   coalescing and choosing a cheaper hard register are done by hard
   register preferencing during hard register assignment.  The live
   range splitting is a byproduct of the regional register allocation.

   Major IRA notions are:

     o *Region* is a part of the CFG where graph coloring based on
       the Chaitin-Briggs algorithm is done.  IRA can work on any set of
       nested CFG regions forming a tree.  Currently the regions are
       the entire function for the root region and natural loops for
       the other regions.  Therefore the data structure representing a
       region is called loop_tree_node.

     o *Cover class* is a register class belonging to a set of
       non-intersecting register classes containing all of the
       hard-registers available for register allocation.  The set of
       all cover classes for a target is defined in the corresponding
       machine-description file according to some criteria.  Such a
       notion is needed because the Chaitin-Briggs algorithm works on
       non-intersecting register classes.

     o *Allocno* represents the live range of a pseudo-register in a
       region.  Besides the obvious attributes like the corresponding
       pseudo-register number, cover class, conflicting allocnos and
       conflicting hard-registers, there are a few allocno attributes
       which are important for understanding the allocation algorithm:

       - *Live ranges*.  This is a list of ranges of *program
         points* where the allocno lives.  Program points represent
         places where a pseudo can be born or become dead (there are
         approximately two times more program points than the insns)
         and they are represented by integers starting with 0.  The
         live ranges are used to find conflicts between allocnos of
         different cover classes.  They also play a very important role
         in the transformation of the IRA internal representation of
         several regions into a one region representation.  The latter is
         used during the reload pass work because each allocno
         represents all of the corresponding pseudo-registers.

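         As an illustrative sketch (the real type lives in ira-int.h;
         the names here are simplified), a live range list can be
         pictured as

           struct live_range { int start, finish; struct live_range *next; };

         so an allocno live at program points 5..12 and 30..34 carries
         the list [5,12] -> [30,34], and two allocnos conflict only if
         some of their ranges overlap.
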
       - *Hard-register costs*.  This is a vector of size equal to the
         number of available hard-registers of the allocno's cover
         class.  The cost of a callee-clobbered hard-register for an
         allocno is increased by the cost of save/restore code around
         the calls through the given allocno's life.  If the allocno
         is a move instruction operand and another operand is a
         hard-register of the allocno's cover class, the cost of the
         hard-register is decreased by the move cost.

         When an allocno is assigned, the hard-register with minimal
         full cost is used.  Initially, a hard-register's full cost is
         the corresponding value from the hard-register's cost vector.
         If the allocno is connected by a *copy* (see below) to
         another allocno which has just received a hard-register, the
         cost of the hard-register is decreased.  Before choosing a
         hard-register for an allocno, the allocno's current costs of
         the hard-registers are modified by the conflict hard-register
         costs of all of the conflicting allocnos which are not
         assigned yet.

       - *Conflict hard-register costs*.  This is a vector of the same
         size as the hard-register costs vector.  To permit an
         unassigned allocno to get a better hard-register, IRA uses
         this vector to calculate the final full cost of the
         available hard-registers.  Conflict hard-register costs of an
         unassigned allocno are also changed with a change of the
         hard-register cost of the allocno when a copy involving the
         allocno is processed as described above.  This is done to
         show other unassigned allocnos that a given allocno prefers
         some hard-registers in order to remove the move instruction
         corresponding to the copy.

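         As a toy illustration (numbers invented): if an allocno's
         cover class has hard registers {r0, r1, r2} with cost vector
         {10, 6, 6}, and its still unassigned conflicting allocnos
         contribute conflict costs {0, 5, 0}, the full costs are
         {10, 11, 6} and r2 is preferred.
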
     o *Cap*.  If a pseudo-register does not live in a region but
       lives in a nested region, IRA creates a special allocno called
       a cap in the outer region.  A region cap is also created for a
       subregion cap.

     o *Copy*.  Allocnos can be connected by copies.  Copies are used
       to modify hard-register costs for allocnos during coloring.
       Such modifications reflect a preference to use the same
       hard-register for the allocnos connected by copies.  Usually
       copies are created for move insns (in this case it results in
       register coalescing).  But IRA also creates copies for operands
       of an insn which should be assigned to the same hard-register
       due to constraints in the machine description (it usually
       results in removing a move generated in reload to satisfy
       the constraints) and copies referring to the allocno which is
       the output operand of an instruction and the allocno which is
       an input operand dying in the instruction (creation of such
       copies results in less register shuffling).  IRA *does not*
       create copies between the same register allocnos from different
       regions because we use another technique for propagating
       hard-register preference on the borders of regions.

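       For instance, for a move insn "r105 = r103" (pseudo numbers
       invented for illustration) IRA creates a copy connecting the
       allocnos of r105 and r103; coloring then biases both toward the
       same hard register, and if they receive it the move becomes
       trivial and is removed later.
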
   Allocnos (including caps) for the upper region in the region tree
   *accumulate* information important for coloring from allocnos with
   the same pseudo-register from nested regions.  This includes
   hard-register and memory costs, conflicts with hard-registers,
   allocno conflicts, allocno copies and more.  *Thus, attributes for
   allocnos in a region have the same values as if the region had no
   subregions*.  This means that attributes for allocnos in the
   outermost region corresponding to the function have the same values
   as though the allocation used only one region which is the entire
   function.  It also means that we can look at IRA's work as if IRA
   first did the allocation for the whole function, then improved it
   for loops, then for their subloops, and so on.

   IRA major passes are:

     o Building IRA internal representation which consists of the
       following subpasses:

       * First, IRA builds regions and creates allocnos (file
         ira-build.c) and initializes most of their attributes.

       * Then IRA finds a cover class for each allocno and calculates
         its initial (non-accumulated) cost of memory and each
         hard-register of its cover class (file ira-cost.c).

       * IRA creates live ranges of each allocno, calculates register
         pressure for each cover class in each region, sets up
         conflict hard registers for each allocno and info about calls
         the allocno lives through (file ira-lives.c).

       * IRA removes low register pressure loops from the regions
         mostly to speed IRA up (file ira-build.c).

       * IRA propagates accumulated allocno info from lower region
         allocnos to corresponding upper region allocnos (file
         ira-build.c).

       * IRA creates all caps (file ira-build.c).

       * Having live-ranges of allocnos and their cover classes, IRA
         creates conflicting allocnos of the same cover class for each
         allocno.  Conflicting allocnos are stored as a bit vector or
         an array of pointers to the conflicting allocnos, whichever is
         more profitable (file ira-conflicts.c).  At this point IRA
         creates allocno copies.

     o Coloring.  Now IRA has all the necessary info to start the graph
       coloring process.  It is done in each region in a top-down
       traversal of the region tree (file ira-color.c).  There are the
       following subpasses (a simplified sketch follows this list):

       * Optional aggressive coalescing of allocnos in the region.

       * Putting allocnos onto the coloring stack.  IRA uses Briggs
         optimistic coloring which is a major improvement over
         Chaitin's coloring.  Therefore IRA does not spill allocnos at
         this point.  There is some freedom in the order of putting
         allocnos on the stack which can affect the final result of
         the allocation.  IRA uses some heuristics to improve the order.

       * Popping the allocnos from the stack and assigning them hard
         registers.  If IRA cannot assign a hard register to an
         allocno and the allocno is coalesced, IRA undoes the
         coalescing and puts the uncoalesced allocnos onto the stack in
         the hope that some such allocnos will get a hard register
         separately.  If IRA fails to assign a hard register, or memory
         is more profitable for the allocno, IRA spills the allocno.  IRA
         assigns the allocno the hard-register with minimal full
         allocation cost which reflects the cost of usage of the
         hard-register for the allocno and the cost of usage of the
         hard-register for allocnos conflicting with the given allocno.

       * After allocno assignment in the region, IRA modifies the hard
         register and memory costs for the corresponding allocnos in
         the subregions to reflect the cost of possible loads, stores,
         or moves on the border of the region and its subregions.
         When the default regional allocation algorithm is used
         (-fira-algorithm=mixed), IRA just propagates the assignment
         for allocnos if the register pressure in the region for the
         corresponding cover class is less than the number of available
         hard registers for the given cover class.

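       In outline, the two coloring phases above work like this (a
       heavily simplified sketch, not the actual ira-color.c code):

         while (allocnos remain in the region)
           push an allocno whose conflict count is less than the
           number of available hard registers, else optimistically
           push the best spill candidate;
         while (the stack is not empty)
           a = pop ();  assign a the conflict-free hard register
           with minimal full cost, or spill a if none is left and
           memory is cheaper.
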
     o Spill/restore code moving.  When IRA performs an allocation
       by traversing regions in top-down order, it does not know what
       happens below in the region tree.  Therefore, sometimes IRA
       misses opportunities to perform a better allocation.  A simple
       optimization tries to improve allocation in a region having
       subregions and contained in another region.  If the
       corresponding allocnos in the subregion are spilled, it spills
       the region allocno if it is profitable.  The optimization
       implements a simple iterative algorithm performing profitable
       transformations while they are still possible.  It is fast in
       practice, so there is no real need for a better time complexity
       algorithm.

     o Code change.  After coloring, two allocnos representing the same
       pseudo-register outside and inside a region respectively may be
       assigned to different locations (hard-registers or memory).  In
       this case IRA creates and uses a new pseudo-register inside the
       region and adds code to move allocno values on the region's
       borders.  This is done during top-down traversal of the regions
       (file ira-emit.c).  In some complicated cases IRA can create a
       new allocno to move allocno values (e.g. when a swap of values
       stored in two hard-registers is needed).  At this stage, the
       new allocno is marked as spilled.  IRA still creates the
       pseudo-register and the moves on the region borders even when
       both allocnos were assigned to the same hard-register.  If the
       reload pass spills a pseudo-register for some reason, the
       effect will be smaller because another allocno will still be in
       the hard-register.  In most cases, this is better than spilling
       both allocnos.  If reload does not change the allocation
       for the two pseudo-registers, the trivial move will be removed
       by post-reload optimizations.  IRA does not generate moves for
       allocnos assigned to the same hard register when the default
       regional allocation algorithm is used and the register pressure
       in the region for the corresponding allocno cover class is less
       than the number of available hard registers for the given cover
       class.  IRA also does some optimizations to remove redundant
       stores and to reduce code duplication on the region borders.

     o Flattening internal representation.  After changing code, IRA
       transforms its internal representation for several regions into
       one region representation (file ira-build.c).  This process is
       called IR flattening.  Such a process is more complicated than IR
       rebuilding would be, but is much faster.

     o After IR flattening, IRA tries to assign hard registers to all
       spilled allocnos.  This is implemented by a simple and fast
       priority coloring algorithm (see function
       ira_reassign_conflict_allocnos::ira-color.c).  Here new allocnos
       created during the code change pass can be assigned to hard
       registers.

     o At the end IRA calls the reload pass.  The reload pass
       communicates with IRA through several functions in file
       ira-color.c to improve its decisions in

       * sharing stack slots for the spilled pseudos based on IRA info
         about pseudo-register conflicts.

       * reassigning hard-registers to all spilled pseudos at the end
         of each reload iteration.

       * choosing a better hard-register to spill based on IRA info
         about pseudo-register live ranges and the register pressure
         in places where the pseudo-register lives.

   IRA uses a lot of data representing the target processors.  These
   data are initialized in file ira.c.

   If the function has no loops (or the loops are ignored when
   -fira-algorithm=CB is used), we have classic Chaitin-Briggs
   coloring (only instead of a separate coalescing pass, we use hard
   register preferencing).  In such a case, IRA works much faster
   because many things are not done (like IR flattening, the
   spill/restore optimization, and the code change).

   Literature worth reading for a better understanding of the code:

   o Preston Briggs, Keith D. Cooper, Linda Torczon.  Improvements to
     Graph Coloring Register Allocation.

   o David Callahan, Brian Koblenz.  Register allocation via
     hierarchical graph coloring.

   o Keith Cooper, Anshuman Dasgupta, Jason Eckhardt.  Revisiting Graph
     Coloring Register Allocation: A Study of the Chaitin-Briggs and
     Callahan-Koblenz Algorithms.

   o Guei-Yuan Lueh, Thomas Gross, and Ali-Reza Adl-Tabatabai.  Global
     Register Allocation Based on Graph Fusion.

   o Vladimir Makarov.  The Integrated Register Allocator for GCC.

   o Vladimir Makarov.  The top-down register allocator for irregular
     register file architectures.

*/


#include "config.h"
301
#include "system.h"
302
#include "coretypes.h"
303
#include "tm.h"
304
#include "regs.h"
305
#include "rtl.h"
306
#include "tm_p.h"
307
#include "target.h"
308
#include "flags.h"
309
#include "obstack.h"
310
#include "bitmap.h"
311
#include "hard-reg-set.h"
312
#include "basic-block.h"
313
#include "expr.h"
314
#include "recog.h"
315
#include "params.h"
316
#include "timevar.h"
317
#include "tree-pass.h"
318
#include "output.h"
319
#include "except.h"
320
#include "reload.h"
321
#include "errors.h"
322
#include "integrate.h"
323
#include "df.h"
324
#include "ggc.h"
325
#include "ira-int.h"
326
 
327
 
328
/* A modified value of flag `-fira-verbose' used internally.  */
int internal_flag_ira_verbose;

/* Dump file of the allocator if it is not NULL.  */
FILE *ira_dump_file;

/* Pools for allocnos, copies, allocno live ranges.  */
alloc_pool allocno_pool, copy_pool, allocno_live_range_pool;

/* The number of elements in the following array.  */
int ira_spilled_reg_stack_slots_num;

/* The following array contains info about spilled pseudo-registers
   stack slots used in the current function so far.  */
struct ira_spilled_reg_stack_slot *ira_spilled_reg_stack_slots;

/* Correspondingly, the overall cost of the allocation, the cost of the
   allocnos assigned to hard-registers, the cost of the allocnos
   assigned to memory, and the cost of loads, stores and register move
   insns generated for pseudo-register live range splitting (see
   ira-emit.c).  */
int ira_overall_cost;
int ira_reg_cost, ira_mem_cost;
int ira_load_cost, ira_store_cost, ira_shuffle_cost;
int ira_move_loops_num, ira_additional_jumps_num;

/* All registers that can be eliminated.  */

HARD_REG_SET eliminable_regset;

/* Map: hard regs X modes -> set of hard registers for storing value
   of given mode starting with given hard register.  */
HARD_REG_SET ira_reg_mode_hard_regset[FIRST_PSEUDO_REGISTER][NUM_MACHINE_MODES];

/* The following two variables are array analogs of the macros
   MEMORY_MOVE_COST and REGISTER_MOVE_COST.  */
short int ira_memory_move_cost[MAX_MACHINE_MODE][N_REG_CLASSES][2];
move_table *ira_register_move_cost[MAX_MACHINE_MODE];

/* Similar to may_move_in_cost but it is calculated in IRA instead of
   regclass.  Another difference is that we take only available hard
   registers into account to figure out that one register class is a
   subset of another one.  */
move_table *ira_may_move_in_cost[MAX_MACHINE_MODE];

/* Similar to may_move_out_cost but it is calculated in IRA instead of
   regclass.  Another difference is that we take only available hard
   registers into account to figure out that one register class is a
   subset of another one.  */
move_table *ira_may_move_out_cost[MAX_MACHINE_MODE];

/* Register class subset relation: TRUE if the first class is a subset
   of the second one considering only hard registers available for the
   allocation.  */
int ira_class_subset_p[N_REG_CLASSES][N_REG_CLASSES];

/* Temporary hard reg set used for different calculations.  */
static HARD_REG_SET temp_hard_regset;



/* The function sets up the map IRA_REG_MODE_HARD_REGSET.  */
static void
setup_reg_mode_hard_regset (void)
{
  int i, m, hard_regno;

  for (m = 0; m < NUM_MACHINE_MODES; m++)
    for (hard_regno = 0; hard_regno < FIRST_PSEUDO_REGISTER; hard_regno++)
      {
        CLEAR_HARD_REG_SET (ira_reg_mode_hard_regset[hard_regno][m]);
        for (i = hard_regno_nregs[hard_regno][m] - 1; i >= 0; i--)
          if (hard_regno + i < FIRST_PSEUDO_REGISTER)
            SET_HARD_REG_BIT (ira_reg_mode_hard_regset[hard_regno][m],
                              hard_regno + i);
      }
}


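/* For illustration (the numbers are target-dependent): on a 32-bit
   target where hard_regno_nregs[0][DImode] is 2, the function above
   makes ira_reg_mode_hard_regset[0][DImode] contain hard registers 0
   and 1, i.e. all hard registers occupied by a DImode value starting
   in hard register 0.  */
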
/* Hard registers that cannot be used by the register allocator for
   all functions of the current compilation unit.  */
static HARD_REG_SET no_unit_alloc_regs;

/* Array of the hard registers of given class which are available for
   allocation.  The order is defined by the allocation order.  */
short ira_class_hard_regs[N_REG_CLASSES][FIRST_PSEUDO_REGISTER];

/* The number of elements of the above array for given register
   class.  */
int ira_class_hard_regs_num[N_REG_CLASSES];

/* Index (in ira_class_hard_regs) for given register class and hard
   register (in general case a hard register can belong to several
   register classes).  The index is negative for hard registers
   unavailable for the allocation.  */
short ira_class_hard_reg_index[N_REG_CLASSES][FIRST_PSEUDO_REGISTER];

/* The function sets up the three arrays declared above.  */
static void
setup_class_hard_regs (void)
{
  int cl, i, hard_regno, n;
  HARD_REG_SET processed_hard_reg_set;

  ira_assert (SHRT_MAX >= FIRST_PSEUDO_REGISTER);
  /* We could call ORDER_REGS_FOR_LOCAL_ALLOC here (it usually puts
     callee-used hard registers first).  But our heuristics work
     better.  */
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      CLEAR_HARD_REG_SET (processed_hard_reg_set);
      for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
        ira_class_hard_reg_index[cl][i] = -1;
      for (n = 0, i = 0; i < FIRST_PSEUDO_REGISTER; i++)
        {
#ifdef REG_ALLOC_ORDER
          hard_regno = reg_alloc_order[i];
#else
          hard_regno = i;
#endif
          if (TEST_HARD_REG_BIT (processed_hard_reg_set, hard_regno))
            continue;
          SET_HARD_REG_BIT (processed_hard_reg_set, hard_regno);
          if (! TEST_HARD_REG_BIT (temp_hard_regset, hard_regno))
            ira_class_hard_reg_index[cl][hard_regno] = -1;
          else
            {
              ira_class_hard_reg_index[cl][hard_regno] = n;
              ira_class_hard_regs[cl][n++] = hard_regno;
            }
        }
      ira_class_hard_regs_num[cl] = n;
    }
}

/* Number of hard registers of given class available for register
   allocation.  */
int ira_available_class_regs[N_REG_CLASSES];

/* Set up IRA_AVAILABLE_CLASS_REGS.  */
static void
setup_available_class_regs (void)
{
  int i, j;

  memset (ira_available_class_regs, 0, sizeof (ira_available_class_regs));
  for (i = 0; i < N_REG_CLASSES; i++)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
        if (TEST_HARD_REG_BIT (temp_hard_regset, j))
          ira_available_class_regs[i]++;
    }
}

/* Set up global variables defining info about hard registers for the
   allocation.  These depend on USE_HARD_FRAME_P whose TRUE value means
   that we can use the hard frame pointer for the allocation.  */
static void
setup_alloc_regs (bool use_hard_frame_p)
{
  COPY_HARD_REG_SET (no_unit_alloc_regs, fixed_reg_set);
  if (! use_hard_frame_p)
    SET_HARD_REG_BIT (no_unit_alloc_regs, HARD_FRAME_POINTER_REGNUM);
  setup_class_hard_regs ();
  setup_available_class_regs ();
}



/* Set up IRA_MEMORY_MOVE_COST and IRA_CLASS_SUBSET_P.  */
static void
setup_class_subset_and_memory_move_costs (void)
{
  int cl, cl2, mode;
  HARD_REG_SET temp_hard_regset2;

  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
    ira_memory_move_cost[mode][NO_REGS][0]
      = ira_memory_move_cost[mode][NO_REGS][1] = SHRT_MAX;
  for (cl = (int) N_REG_CLASSES - 1; cl >= 0; cl--)
    {
      if (cl != (int) NO_REGS)
        for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
          {
            ira_memory_move_cost[mode][cl][0] =
              MEMORY_MOVE_COST ((enum machine_mode) mode,
                                (enum reg_class) cl, 0);
            ira_memory_move_cost[mode][cl][1] =
              MEMORY_MOVE_COST ((enum machine_mode) mode,
                                (enum reg_class) cl, 1);
            /* Costs for NO_REGS are used in cost calculation on the
               1st pass when the preferred register classes are not
               known yet.  In this case we take the best scenario.  */
            if (ira_memory_move_cost[mode][NO_REGS][0]
                > ira_memory_move_cost[mode][cl][0])
              ira_memory_move_cost[mode][NO_REGS][0]
                = ira_memory_move_cost[mode][cl][0];
            if (ira_memory_move_cost[mode][NO_REGS][1]
                > ira_memory_move_cost[mode][cl][1])
              ira_memory_move_cost[mode][NO_REGS][1]
                = ira_memory_move_cost[mode][cl][1];
          }
      for (cl2 = (int) N_REG_CLASSES - 1; cl2 >= 0; cl2--)
        {
          COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
          COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
          ira_class_subset_p[cl][cl2]
            = hard_reg_set_subset_p (temp_hard_regset, temp_hard_regset2);
        }
    }
}



/* Define the following macro if allocation through malloc is
   preferable.  */
#define IRA_NO_OBSTACK

#ifndef IRA_NO_OBSTACK
/* Obstack used for storing all dynamic data (except bitmaps) of the
   IRA.  */
static struct obstack ira_obstack;
#endif

/* Obstack used for storing all bitmaps of the IRA.  */
static struct bitmap_obstack ira_bitmap_obstack;

/* Allocate memory of size LEN for IRA data.  */
void *
ira_allocate (size_t len)
{
  void *res;

#ifndef IRA_NO_OBSTACK
  res = obstack_alloc (&ira_obstack, len);
#else
  res = xmalloc (len);
#endif
  return res;
}

/* Reallocate memory PTR of size LEN for IRA data.  */
void *
ira_reallocate (void *ptr, size_t len)
{
  void *res;

#ifndef IRA_NO_OBSTACK
  res = obstack_alloc (&ira_obstack, len);
#else
  res = xrealloc (ptr, len);
#endif
  return res;
}

/* Free memory ADDR allocated for IRA data.  */
void
ira_free (void *addr ATTRIBUTE_UNUSED)
{
#ifndef IRA_NO_OBSTACK
  /* do nothing */
#else
  free (addr);
#endif
}


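/* A usage sketch (not code from this file): with IRA_NO_OBSTACK
   defined above, ira_allocate/ira_free behave like xmalloc/free:

     int *v = (int *) ira_allocate (n * sizeof (int));
     ...
     ira_free (v);

   Without IRA_NO_OBSTACK the memory would come from ira_obstack and
   ira_free would be a no-op, the obstack being released wholesale.  */
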
/* Allocate and return a bitmap for IRA.  */
bitmap
ira_allocate_bitmap (void)
{
  return BITMAP_ALLOC (&ira_bitmap_obstack);
}

/* Free bitmap B allocated for IRA.  */
void
ira_free_bitmap (bitmap b ATTRIBUTE_UNUSED)
{
  /* do nothing */
}



/* Output information about allocation of all allocnos (except for
   caps) into file F.  */
void
ira_print_disposition (FILE *f)
{
  int i, n, max_regno;
  ira_allocno_t a;
  basic_block bb;

  fprintf (f, "Disposition:");
  max_regno = max_reg_num ();
  for (n = 0, i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
    for (a = ira_regno_allocno_map[i];
         a != NULL;
         a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
      {
        if (n % 4 == 0)
          fprintf (f, "\n");
        n++;
        fprintf (f, " %4d:r%-4d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
        if ((bb = ALLOCNO_LOOP_TREE_NODE (a)->bb) != NULL)
          fprintf (f, "b%-3d", bb->index);
        else
          fprintf (f, "l%-3d", ALLOCNO_LOOP_TREE_NODE (a)->loop->num);
        if (ALLOCNO_HARD_REGNO (a) >= 0)
          fprintf (f, " %3d", ALLOCNO_HARD_REGNO (a));
        else
          fprintf (f, " mem");
      }
  fprintf (f, "\n");
}

/* Output information about allocation of all allocnos into
   stderr.  */
void
ira_debug_disposition (void)
{
  ira_print_disposition (stderr);
}


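/* A hypothetical fragment of the disposition dump produced above,
   given the fprintf formats used (allocno number, pseudo, enclosing
   basic block "b" or loop "l", and hard register or "mem"):

      1:r73  b3    0    2:r74  l1    1    3:r75  l1  mem

   The exact numbers are of course function- and target-specific.  */
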
/* For each reg class, table listing all the classes contained in it
   (excluding the class itself; non-allocatable registers are
   excluded from consideration).  */
static enum reg_class alloc_reg_class_subclasses[N_REG_CLASSES][N_REG_CLASSES];

/* Initialize the table of subclasses of each reg class.  */
static void
setup_reg_subclasses (void)
{
  int i, j;
  HARD_REG_SET temp_hard_regset2;

  for (i = 0; i < N_REG_CLASSES; i++)
    for (j = 0; j < N_REG_CLASSES; j++)
      alloc_reg_class_subclasses[i][j] = LIM_REG_CLASSES;

  for (i = 0; i < N_REG_CLASSES; i++)
    {
      if (i == (int) NO_REGS)
        continue;

      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      if (hard_reg_set_empty_p (temp_hard_regset))
        continue;
      for (j = 0; j < N_REG_CLASSES; j++)
        if (i != j)
          {
            enum reg_class *p;

            COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
            AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
            if (! hard_reg_set_subset_p (temp_hard_regset,
                                         temp_hard_regset2))
              continue;
            p = &alloc_reg_class_subclasses[j][0];
            while (*p != LIM_REG_CLASSES) p++;
            *p = (enum reg_class) i;
          }
    }
}



/* Number of cover classes.  Cover classes are non-intersecting register
   classes containing all hard-registers available for the
   allocation.  */
int ira_reg_class_cover_size;

/* The array containing cover classes (see also comments for macro
   IRA_COVER_CLASSES).  Only the first IRA_REG_CLASS_COVER_SIZE elements
   are used for this.  */
enum reg_class ira_reg_class_cover[N_REG_CLASSES];

/* The number of elements in the subsequent array.  */
int ira_important_classes_num;

/* The array containing non-empty classes (including non-empty cover
   classes) which are subclasses of cover classes.  Such classes are
   important for calculation of the hard register usage costs.  */
enum reg_class ira_important_classes[N_REG_CLASSES];

/* The array containing indexes of important classes in the previous
   array.  The array elements are defined only for important
   classes.  */
int ira_important_class_nums[N_REG_CLASSES];

/* Set the four global variables defined above.  */
static void
setup_cover_and_important_classes (void)
{
  int i, j, n, cl;
  bool set_p;
  const enum reg_class *cover_classes;
  HARD_REG_SET temp_hard_regset2;
  static enum reg_class classes[LIM_REG_CLASSES + 1];

  if (targetm.ira_cover_classes == NULL)
    cover_classes = NULL;
  else
    cover_classes = targetm.ira_cover_classes ();
  if (cover_classes == NULL)
    ira_assert (flag_ira_algorithm == IRA_ALGORITHM_PRIORITY);
  else
    {
      for (i = 0; (cl = cover_classes[i]) != LIM_REG_CLASSES; i++)
        classes[i] = (enum reg_class) cl;
      classes[i] = LIM_REG_CLASSES;
    }

  if (flag_ira_algorithm == IRA_ALGORITHM_PRIORITY)
    {
      n = 0;
      for (i = 0; i <= LIM_REG_CLASSES; i++)
        {
          if (i == NO_REGS)
            continue;
#ifdef CONSTRAINT_NUM_DEFINED_P
          for (j = 0; j < CONSTRAINT__LIMIT; j++)
            if ((int) REG_CLASS_FOR_CONSTRAINT ((enum constraint_num) j) == i)
              break;
          if (j < CONSTRAINT__LIMIT)
            {
              classes[n++] = (enum reg_class) i;
              continue;
            }
#endif
          COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[i]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
          for (j = 0; j < LIM_REG_CLASSES; j++)
            {
              if (i == j)
                continue;
              COPY_HARD_REG_SET (temp_hard_regset2, reg_class_contents[j]);
              AND_COMPL_HARD_REG_SET (temp_hard_regset2,
                                      no_unit_alloc_regs);
              if (hard_reg_set_equal_p (temp_hard_regset,
                                        temp_hard_regset2))
                break;
            }
          if (j >= i)
            classes[n++] = (enum reg_class) i;
        }
      classes[n] = LIM_REG_CLASSES;
    }

  ira_reg_class_cover_size = 0;
  for (i = 0; (cl = classes[i]) != LIM_REG_CLASSES; i++)
    {
      for (j = 0; j < i; j++)
        if (flag_ira_algorithm != IRA_ALGORITHM_PRIORITY
            && reg_classes_intersect_p ((enum reg_class) cl, classes[j]))
          gcc_unreachable ();
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      if (! hard_reg_set_empty_p (temp_hard_regset))
        ira_reg_class_cover[ira_reg_class_cover_size++] = (enum reg_class) cl;
    }
  ira_important_classes_num = 0;
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
      AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
      if (! hard_reg_set_empty_p (temp_hard_regset))
        {
          set_p = false;
          for (j = 0; j < ira_reg_class_cover_size; j++)
            {
              COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
              AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
              COPY_HARD_REG_SET (temp_hard_regset2,
                                 reg_class_contents[ira_reg_class_cover[j]]);
              AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
              if ((enum reg_class) cl == ira_reg_class_cover[j]
                  || hard_reg_set_equal_p (temp_hard_regset,
                                           temp_hard_regset2))
                break;
              else if (hard_reg_set_subset_p (temp_hard_regset,
                                              temp_hard_regset2))
                set_p = true;
            }
          if (set_p && j >= ira_reg_class_cover_size)
            ira_important_classes[ira_important_classes_num++]
              = (enum reg_class) cl;
        }
    }
  for (j = 0; j < ira_reg_class_cover_size; j++)
    ira_important_classes[ira_important_classes_num++]
      = ira_reg_class_cover[j];
}

/* Map of all register classes to corresponding cover class containing
   the given class.  If given class is not a subset of a cover class,
   we translate it into the cheapest cover class.  */
enum reg_class ira_class_translate[N_REG_CLASSES];

/* Set up array IRA_CLASS_TRANSLATE.  */
static void
setup_class_translate (void)
{
  int cl, mode;
  enum reg_class cover_class, best_class, *cl_ptr;
  int i, cost, min_cost, best_cost;

  for (cl = 0; cl < N_REG_CLASSES; cl++)
    ira_class_translate[cl] = NO_REGS;

  if (flag_ira_algorithm == IRA_ALGORITHM_PRIORITY)
    for (cl = 0; cl < LIM_REG_CLASSES; cl++)
      {
        COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
        AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
        for (i = 0; i < ira_reg_class_cover_size; i++)
          {
            HARD_REG_SET temp_hard_regset2;

            cover_class = ira_reg_class_cover[i];
            COPY_HARD_REG_SET (temp_hard_regset2,
                               reg_class_contents[cover_class]);
            AND_COMPL_HARD_REG_SET (temp_hard_regset2, no_unit_alloc_regs);
            if (hard_reg_set_equal_p (temp_hard_regset, temp_hard_regset2))
              ira_class_translate[cl] = cover_class;
          }
      }
  for (i = 0; i < ira_reg_class_cover_size; i++)
    {
      cover_class = ira_reg_class_cover[i];
      if (flag_ira_algorithm != IRA_ALGORITHM_PRIORITY)
        for (cl_ptr = &alloc_reg_class_subclasses[cover_class][0];
             (cl = *cl_ptr) != LIM_REG_CLASSES;
             cl_ptr++)
          {
            if (ira_class_translate[cl] == NO_REGS)
              ira_class_translate[cl] = cover_class;
#ifdef ENABLE_IRA_CHECKING
            else
              {
                COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
                AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
                if (! hard_reg_set_empty_p (temp_hard_regset))
                  gcc_unreachable ();
              }
#endif
          }
      ira_class_translate[cover_class] = cover_class;
    }
  /* For classes which are not fully covered by a cover class (in
     other words covered by more than one cover class), use the
     cheapest cover class.  */
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    {
      if (cl == NO_REGS || ira_class_translate[cl] != NO_REGS)
        continue;
      best_class = NO_REGS;
      best_cost = INT_MAX;
      for (i = 0; i < ira_reg_class_cover_size; i++)
        {
          cover_class = ira_reg_class_cover[i];
          COPY_HARD_REG_SET (temp_hard_regset,
                             reg_class_contents[cover_class]);
          AND_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
          if (! hard_reg_set_empty_p (temp_hard_regset))
            {
              min_cost = INT_MAX;
              for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
                {
                  cost = (ira_memory_move_cost[mode][cl][0]
                          + ira_memory_move_cost[mode][cl][1]);
                  if (min_cost > cost)
                    min_cost = cost;
                }
              if (best_class == NO_REGS || best_cost > min_cost)
                {
                  best_class = cover_class;
                  best_cost = min_cost;
                }
            }
        }
      ira_class_translate[cl] = best_class;
    }
}

/* Order numbers of cover classes in original target cover class
   array, -1 for non-cover classes.  */
static int cover_class_order[N_REG_CLASSES];

/* The function used to sort the important classes.  */
static int
comp_reg_classes_func (const void *v1p, const void *v2p)
{
  enum reg_class cl1 = *(const enum reg_class *) v1p;
  enum reg_class cl2 = *(const enum reg_class *) v2p;
  int diff;

  cl1 = ira_class_translate[cl1];
  cl2 = ira_class_translate[cl2];
  if (cl1 != NO_REGS && cl2 != NO_REGS
      && (diff = cover_class_order[cl1] - cover_class_order[cl2]) != 0)
    return diff;
  return (int) cl1 - (int) cl2;
}

/* Reorder important classes according to the order of their cover
   classes.  Set up array ira_important_class_nums too.  */
static void
reorder_important_classes (void)
{
  int i;

  for (i = 0; i < N_REG_CLASSES; i++)
    cover_class_order[i] = -1;
  for (i = 0; i < ira_reg_class_cover_size; i++)
    cover_class_order[ira_reg_class_cover[i]] = i;
  qsort (ira_important_classes, ira_important_classes_num,
         sizeof (enum reg_class), comp_reg_classes_func);
  for (i = 0; i < ira_important_classes_num; i++)
    ira_important_class_nums[ira_important_classes[i]] = i;
}

/* The biggest important reg_class inside the intersection of the two
   reg_classes (calculated taking only hard registers available for
   allocation into account).  If both reg_classes contain no hard
   registers available for allocation, the value is calculated by
   taking all hard-registers including fixed ones into account.  */
enum reg_class ira_reg_class_intersect[N_REG_CLASSES][N_REG_CLASSES];

/* True if the two classes (calculated taking only hard registers
   available for allocation into account) intersect.  */
bool ira_reg_classes_intersect_p[N_REG_CLASSES][N_REG_CLASSES];

/* Important classes with end marker LIM_REG_CLASSES which are
   supersets of the given important class (the first index).  That
   includes the given class itself.  This is calculated taking only
   hard registers available for allocation into account.  */
enum reg_class ira_reg_class_super_classes[N_REG_CLASSES][N_REG_CLASSES];

/* The biggest important reg_class inside the union of the two
   reg_classes (calculated taking only hard registers available for
   allocation into account).  If both reg_classes contain no hard
   registers available for allocation, the value is calculated by
   taking all hard-registers including fixed ones into account.  In
   other words, the value is the corresponding reg_class_subunion
   value.  */
enum reg_class ira_reg_class_union[N_REG_CLASSES][N_REG_CLASSES];

/* Set up the above reg class relations.  */
static void
setup_reg_class_relations (void)
{
  int i, cl1, cl2, cl3;
  HARD_REG_SET intersection_set, union_set, temp_set2;
  bool important_class_p[N_REG_CLASSES];

  memset (important_class_p, 0, sizeof (important_class_p));
  for (i = 0; i < ira_important_classes_num; i++)
    important_class_p[ira_important_classes[i]] = true;
  for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
    {
      ira_reg_class_super_classes[cl1][0] = LIM_REG_CLASSES;
      for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
        {
          ira_reg_classes_intersect_p[cl1][cl2] = false;
          ira_reg_class_intersect[cl1][cl2] = NO_REGS;
          COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl1]);
          AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
          COPY_HARD_REG_SET (temp_set2, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
          if (hard_reg_set_empty_p (temp_hard_regset)
              && hard_reg_set_empty_p (temp_set2))
            {
              for (i = 0;; i++)
                {
                  cl3 = reg_class_subclasses[cl1][i];
                  if (cl3 == LIM_REG_CLASSES)
                    break;
                  if (reg_class_subset_p (ira_reg_class_intersect[cl1][cl2],
                                          (enum reg_class) cl3))
                    ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
                }
              ira_reg_class_union[cl1][cl2] = reg_class_subunion[cl1][cl2];
              continue;
            }
          ira_reg_classes_intersect_p[cl1][cl2]
            = hard_reg_set_intersect_p (temp_hard_regset, temp_set2);
          if (important_class_p[cl1] && important_class_p[cl2]
              && hard_reg_set_subset_p (temp_hard_regset, temp_set2))
            {
              enum reg_class *p;

              p = &ira_reg_class_super_classes[cl1][0];
              while (*p != LIM_REG_CLASSES)
                p++;
              *p++ = (enum reg_class) cl2;
              *p = LIM_REG_CLASSES;
            }
          ira_reg_class_union[cl1][cl2] = NO_REGS;
          COPY_HARD_REG_SET (intersection_set, reg_class_contents[cl1]);
          AND_HARD_REG_SET (intersection_set, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (intersection_set, no_unit_alloc_regs);
          COPY_HARD_REG_SET (union_set, reg_class_contents[cl1]);
          IOR_HARD_REG_SET (union_set, reg_class_contents[cl2]);
          AND_COMPL_HARD_REG_SET (union_set, no_unit_alloc_regs);
          for (i = 0; i < ira_important_classes_num; i++)
            {
              cl3 = ira_important_classes[i];
              COPY_HARD_REG_SET (temp_hard_regset, reg_class_contents[cl3]);
              AND_COMPL_HARD_REG_SET (temp_hard_regset, no_unit_alloc_regs);
              if (hard_reg_set_subset_p (temp_hard_regset, intersection_set))
                {
                  COPY_HARD_REG_SET
                    (temp_set2,
                     reg_class_contents[(int)
                                        ira_reg_class_intersect[cl1][cl2]]);
                  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
                  if (! hard_reg_set_subset_p (temp_hard_regset, temp_set2)
                      /* Ignore unavailable hard registers and prefer
                         smallest class for debugging purposes.  */
                      || (hard_reg_set_equal_p (temp_hard_regset, temp_set2)
                          && hard_reg_set_subset_p
                             (reg_class_contents[cl3],
                              reg_class_contents
                              [(int) ira_reg_class_intersect[cl1][cl2]])))
                    ira_reg_class_intersect[cl1][cl2] = (enum reg_class) cl3;
                }
              if (hard_reg_set_subset_p (temp_hard_regset, union_set))
                {
                  COPY_HARD_REG_SET
                    (temp_set2,
                     reg_class_contents[(int) ira_reg_class_union[cl1][cl2]]);
                  AND_COMPL_HARD_REG_SET (temp_set2, no_unit_alloc_regs);
                  if (ira_reg_class_union[cl1][cl2] == NO_REGS
                      || (hard_reg_set_subset_p (temp_set2, temp_hard_regset)
                          && (! hard_reg_set_equal_p (temp_set2,
                                                      temp_hard_regset)
                              /* Ignore unavailable hard registers and
                                 prefer smallest class for debugging
                                 purposes.  */
                              || hard_reg_set_subset_p
                                 (reg_class_contents[cl3],
                                  reg_class_contents
                                  [(int) ira_reg_class_union[cl1][cl2]]))))
                    ira_reg_class_union[cl1][cl2] = (enum reg_class) cl3;
                }
            }
        }
    }
}

/* Output all cover classes and the translation map into file F.  */
static void
print_class_cover (FILE *f)
{
  static const char *const reg_class_names[] = REG_CLASS_NAMES;
  int i;

  fprintf (f, "Class cover:\n");
  for (i = 0; i < ira_reg_class_cover_size; i++)
    fprintf (f, " %s", reg_class_names[ira_reg_class_cover[i]]);
  fprintf (f, "\nClass translation:\n");
  for (i = 0; i < N_REG_CLASSES; i++)
    fprintf (f, " %s -> %s\n", reg_class_names[i],
             reg_class_names[ira_class_translate[i]]);
}

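/* For orientation, the dump written by print_class_cover looks like
   this (a made-up example; the class names are target-specific):

     Class cover:
      GENERAL_REGS FLOAT_REGS
     Class translation:
      NO_REGS -> NO_REGS
      AREG -> GENERAL_REGS
      ...  */
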
/* Output all cover classes and the translation map into
   stderr.  */
void
ira_debug_class_cover (void)
{
  print_class_cover (stderr);
}

/* Set up different arrays concerning class subsets, cover and
   important classes.  */
static void
find_reg_class_closure (void)
{
  setup_reg_subclasses ();
  setup_cover_and_important_classes ();
  setup_class_translate ();
  reorder_important_classes ();
  setup_reg_class_relations ();
}



/* Map: hard register number -> cover class it belongs to.  If the
   corresponding class is NO_REGS, the hard register is not available
   for allocation.  */
enum reg_class ira_hard_regno_cover_class[FIRST_PSEUDO_REGISTER];

/* Set up the array above.  */
static void
setup_hard_regno_cover_class (void)
{
  int i, j;
  enum reg_class cl;

  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    {
      ira_hard_regno_cover_class[i] = NO_REGS;
      for (j = 0; j < ira_reg_class_cover_size; j++)
        {
          cl = ira_reg_class_cover[j];
          if (ira_class_hard_reg_index[cl][i] >= 0)
            {
              ira_hard_regno_cover_class[i] = cl;
              break;
            }
        }
    }
}



/* Map: register class x machine mode -> number of hard registers of
   given class needed to store value of given mode.  If the number is
   different, the size will be negative.  */
int ira_reg_class_nregs[N_REG_CLASSES][MAX_MACHINE_MODE];

/* Maximal value of the previous array elements.  */
int ira_max_nregs;

/* Form IRA_REG_CLASS_NREGS map.  */
static void
setup_reg_class_nregs (void)
{
  int cl, m;

  ira_max_nregs = -1;
  for (cl = 0; cl < N_REG_CLASSES; cl++)
    for (m = 0; m < MAX_MACHINE_MODE; m++)
      {
        ira_reg_class_nregs[cl][m] = CLASS_MAX_NREGS ((enum reg_class) cl,
                                                      (enum machine_mode) m);
        if (ira_max_nregs < ira_reg_class_nregs[cl][m])
          ira_max_nregs = ira_reg_class_nregs[cl][m];
      }
}


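/* As a target-dependent example (for illustration only): on a 32-bit
   target a DImode value occupies two word registers, so
   ira_reg_class_nregs[GENERAL_REGS][DImode] would be 2 while
   ira_reg_class_nregs[GENERAL_REGS][SImode] would be 1.  */
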
/* Array whose values are hard regsets of the hard registers available
   for the allocation of given register class whose HARD_REGNO_MODE_OK
   values for given mode are zero.  */
HARD_REG_SET prohibited_class_mode_regs[N_REG_CLASSES][NUM_MACHINE_MODES];

/* Set up PROHIBITED_CLASS_MODE_REGS.  */
static void
setup_prohibited_class_mode_regs (void)
{
  int i, j, k, hard_regno;
  enum reg_class cl;

  for (i = 0; i < ira_reg_class_cover_size; i++)
    {
      cl = ira_reg_class_cover[i];
      for (j = 0; j < NUM_MACHINE_MODES; j++)
        {
          CLEAR_HARD_REG_SET (prohibited_class_mode_regs[cl][j]);
          for (k = ira_class_hard_regs_num[cl] - 1; k >= 0; k--)
            {
              hard_regno = ira_class_hard_regs[cl][k];
              if (! HARD_REGNO_MODE_OK (hard_regno, (enum machine_mode) j))
                SET_HARD_REG_BIT (prohibited_class_mode_regs[cl][j],
                                  hard_regno);
            }
        }
    }
}



/* Allocate and initialize IRA_REGISTER_MOVE_COST,
   IRA_MAY_MOVE_IN_COST, and IRA_MAY_MOVE_OUT_COST for MODE if it is
   not done yet.  */
void
ira_init_register_move_cost (enum machine_mode mode)
{
  int cl1, cl2;

  ira_assert (ira_register_move_cost[mode] == NULL
              && ira_may_move_in_cost[mode] == NULL
              && ira_may_move_out_cost[mode] == NULL);
  if (move_cost[mode] == NULL)
    init_move_cost (mode);
  ira_register_move_cost[mode] = move_cost[mode];
  /* Don't use ira_allocate because the tables exist outside the scope
     of an IRA call.  */
  ira_may_move_in_cost[mode]
    = (move_table *) xmalloc (sizeof (move_table) * N_REG_CLASSES);
  memcpy (ira_may_move_in_cost[mode], may_move_in_cost[mode],
          sizeof (move_table) * N_REG_CLASSES);
  ira_may_move_out_cost[mode]
    = (move_table *) xmalloc (sizeof (move_table) * N_REG_CLASSES);
  memcpy (ira_may_move_out_cost[mode], may_move_out_cost[mode],
          sizeof (move_table) * N_REG_CLASSES);
  for (cl1 = 0; cl1 < N_REG_CLASSES; cl1++)
    {
      for (cl2 = 0; cl2 < N_REG_CLASSES; cl2++)
        {
          if (ira_class_subset_p[cl1][cl2])
            ira_may_move_in_cost[mode][cl1][cl2] = 0;
          if (ira_class_subset_p[cl2][cl1])
            ira_may_move_out_cost[mode][cl1][cl2] = 0;
        }
    }
}



/* This is called once during compiler work.  It sets up
   different arrays whose values don't depend on the compiled
   function.  */
void
ira_init_once (void)
{
  int mode;

  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
    {
      ira_register_move_cost[mode] = NULL;
      ira_may_move_in_cost[mode] = NULL;
      ira_may_move_out_cost[mode] = NULL;
    }
  ira_init_costs_once ();
}

/* Free ira_register_move_cost, ira_may_move_in_cost, and
   ira_may_move_out_cost for each mode.  */
static void
free_register_move_costs (void)
{
  int mode;

  for (mode = 0; mode < MAX_MACHINE_MODE; mode++)
    {
      if (ira_may_move_in_cost[mode] != NULL)
        free (ira_may_move_in_cost[mode]);
      if (ira_may_move_out_cost[mode] != NULL)
        free (ira_may_move_out_cost[mode]);
      ira_register_move_cost[mode] = NULL;
      ira_may_move_in_cost[mode] = NULL;
      ira_may_move_out_cost[mode] = NULL;
    }
}

/* This is called every time register-related information is
   changed.  */
void
ira_init (void)
{
  free_register_move_costs ();
  setup_reg_mode_hard_regset ();
  setup_alloc_regs (flag_omit_frame_pointer != 0);
  setup_class_subset_and_memory_move_costs ();
  find_reg_class_closure ();
  setup_hard_regno_cover_class ();
  setup_reg_class_nregs ();
  setup_prohibited_class_mode_regs ();
  ira_init_costs ();
}

/* Function called once at the end of compiler work.  */
void
ira_finish_once (void)
{
  ira_finish_costs_once ();
  free_register_move_costs ();
}



/* Array whose values are hard regset of hard registers for which
1319
   move of the hard register in given mode into itself is
1320
   prohibited.  */
1321
HARD_REG_SET ira_prohibited_mode_move_regs[NUM_MACHINE_MODES];
1322
 
1323
/* Flag of that the above array has been initialized.  */
1324
static bool ira_prohibited_mode_move_regs_initialized_p = false;

/* Set up IRA_PROHIBITED_MODE_MOVE_REGS.  */
static void
setup_prohibited_mode_move_regs (void)
{
  int i, j;
  rtx test_reg1, test_reg2, move_pat, move_insn;

  if (ira_prohibited_mode_move_regs_initialized_p)
    return;
  ira_prohibited_mode_move_regs_initialized_p = true;
  test_reg1 = gen_rtx_REG (VOIDmode, 0);
  test_reg2 = gen_rtx_REG (VOIDmode, 0);
  move_pat = gen_rtx_SET (VOIDmode, test_reg1, test_reg2);
  move_insn = gen_rtx_INSN (VOIDmode, 0, 0, 0, 0, 0, move_pat, -1, 0);
  for (i = 0; i < NUM_MACHINE_MODES; i++)
    {
      SET_HARD_REG_SET (ira_prohibited_mode_move_regs[i]);
      for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
        {
          if (! HARD_REGNO_MODE_OK (j, (enum machine_mode) i))
            continue;
          SET_REGNO (test_reg1, j);
          PUT_MODE (test_reg1, (enum machine_mode) i);
          SET_REGNO (test_reg2, j);
          PUT_MODE (test_reg2, (enum machine_mode) i);
          INSN_CODE (move_insn) = -1;
          recog_memoized (move_insn);
          if (INSN_CODE (move_insn) < 0)
            continue;
          extract_insn (move_insn);
          if (! constrain_operands (1))
            continue;
          CLEAR_HARD_REG_BIT (ira_prohibited_mode_move_regs[i], j);
        }
    }
}
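
/* Illustrative sketch (hypothetical helper, not part of the original
   file): a self-move of hard register REGNO in MODE is possible
   exactly when the recognition loop above cleared its bit.  */
static bool
example_self_move_ok_p (enum machine_mode mode, int regno)
{
  return ! TEST_HARD_REG_BIT (ira_prohibited_mode_move_regs[mode], regno);
}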
1362
 
1363
 
1364
 
1365
/* Function specific hard registers that can not be used for the
1366
   register allocation.  */
1367
HARD_REG_SET ira_no_alloc_regs;
1368
 
1369
/* Return TRUE if *LOC contains an asm.  */
1370
static int
1371
insn_contains_asm_1 (rtx *loc, void *data ATTRIBUTE_UNUSED)
1372
{
1373
  if ( !*loc)
1374
    return FALSE;
1375
  if (GET_CODE (*loc) == ASM_OPERANDS)
1376
    return TRUE;
1377
  return FALSE;
1378
}
1379
 
1380
 
1381
/* Return TRUE if INSN contains an ASM.  */
1382
static bool
1383
insn_contains_asm (rtx insn)
1384
{
1385
  return for_each_rtx (&insn, insn_contains_asm_1, NULL);
1386
}

/* Set up regs_asm_clobbered.  */
static void
compute_regs_asm_clobbered (char *regs_asm_clobbered)
{
  basic_block bb;

  memset (regs_asm_clobbered, 0, sizeof (char) * FIRST_PSEUDO_REGISTER);

  FOR_EACH_BB (bb)
    {
      rtx insn;
      FOR_BB_INSNS_REVERSE (bb, insn)
        {
          df_ref *def_rec;

          if (insn_contains_asm (insn))
            for (def_rec = DF_INSN_DEFS (insn); *def_rec; def_rec++)
              {
                df_ref def = *def_rec;
                unsigned int dregno = DF_REF_REGNO (def);
                if (dregno < FIRST_PSEUDO_REGISTER)
                  {
                    unsigned int i;
                    enum machine_mode mode = GET_MODE (DF_REF_REAL_REG (def));
                    unsigned int end = dregno
                      + hard_regno_nregs[dregno][mode] - 1;

                    for (i = dregno; i <= end; ++i)
                      regs_asm_clobbered[i] = 1;
                  }
              }
        }
    }
}


/* Set up ELIMINABLE_REGSET, IRA_NO_ALLOC_REGS, and REGS_EVER_LIVE.  */
void
ira_setup_eliminable_regset (void)
{
  /* Like regs_ever_live, but 1 if a reg is set or clobbered from an
     asm.  Unlike regs_ever_live, elements of this array corresponding
     to eliminable regs (like the frame pointer) are set if an asm
     sets them.  */
  char *regs_asm_clobbered
    = (char *) alloca (FIRST_PSEUDO_REGISTER * sizeof (char));
#ifdef ELIMINABLE_REGS
  int i;
  static const struct {const int from, to; } eliminables[] = ELIMINABLE_REGS;
#endif
  /* FIXME: If EXIT_IGNORE_STACK is set, we will not save and restore
     sp for alloca.  So we can't eliminate the frame pointer in that
     case.  At some point, we should improve this by emitting the
     sp-adjusting insns for this case.  */
  int need_fp
    = (! flag_omit_frame_pointer
       || (cfun->calls_alloca && EXIT_IGNORE_STACK)
       /* We need the frame pointer to catch stack overflow exceptions
          if the stack pointer is moving.  */
       || (flag_stack_check && STACK_CHECK_MOVING_SP)
       || crtl->accesses_prior_frames
       || crtl->stack_realign_needed
       || targetm.frame_pointer_required ());

  frame_pointer_needed = need_fp;

  COPY_HARD_REG_SET (ira_no_alloc_regs, no_unit_alloc_regs);
  CLEAR_HARD_REG_SET (eliminable_regset);

  compute_regs_asm_clobbered (regs_asm_clobbered);
  /* Build the regset of all eliminable registers and show we can't
     use those that we already know won't be eliminated.  */
#ifdef ELIMINABLE_REGS
  for (i = 0; i < (int) ARRAY_SIZE (eliminables); i++)
    {
      bool cannot_elim
        = (! targetm.can_eliminate (eliminables[i].from, eliminables[i].to)
           || (eliminables[i].to == STACK_POINTER_REGNUM && need_fp));

      if (! regs_asm_clobbered[eliminables[i].from])
        {
          SET_HARD_REG_BIT (eliminable_regset, eliminables[i].from);

          if (cannot_elim)
            SET_HARD_REG_BIT (ira_no_alloc_regs, eliminables[i].from);
        }
      else if (cannot_elim)
        error ("%s cannot be used in asm here",
               reg_names[eliminables[i].from]);
      else
        df_set_regs_ever_live (eliminables[i].from, true);
    }
#if FRAME_POINTER_REGNUM != HARD_FRAME_POINTER_REGNUM
  if (! regs_asm_clobbered[HARD_FRAME_POINTER_REGNUM])
    {
      SET_HARD_REG_BIT (eliminable_regset, HARD_FRAME_POINTER_REGNUM);
      if (need_fp)
        SET_HARD_REG_BIT (ira_no_alloc_regs, HARD_FRAME_POINTER_REGNUM);
    }
  else if (need_fp)
    error ("%s cannot be used in asm here",
           reg_names[HARD_FRAME_POINTER_REGNUM]);
  else
    df_set_regs_ever_live (HARD_FRAME_POINTER_REGNUM, true);
#endif

#else
  if (! regs_asm_clobbered[FRAME_POINTER_REGNUM])
    {
      SET_HARD_REG_BIT (eliminable_regset, FRAME_POINTER_REGNUM);
      if (need_fp)
        SET_HARD_REG_BIT (ira_no_alloc_regs, FRAME_POINTER_REGNUM);
    }
  else if (need_fp)
    error ("%s cannot be used in asm here", reg_names[FRAME_POINTER_REGNUM]);
  else
    df_set_regs_ever_live (FRAME_POINTER_REGNUM, true);
#endif
}
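
/* Illustrative sketch (hypothetical helper): after the setup above, a
   hard register is available to the allocator iff it is absent from
   ira_no_alloc_regs; registers in eliminable_regset merely remain
   candidates for elimination by reload.  */
static bool
example_allocatable_p (int regno)
{
  return ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno);
}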



/* The length of the following two arrays.  */
int ira_reg_equiv_len;

/* The element value is TRUE if the corresponding regno value is
   invariant.  */
bool *ira_reg_equiv_invariant_p;

/* The element value is the equivalent constant of the given
   pseudo-register, or NULL_RTX.  */
rtx *ira_reg_equiv_const;

/* Set up the two arrays declared above.  */
static void
find_reg_equiv_invariant_const (void)
{
  int i;
  bool invariant_p;
  rtx list, insn, note, constant, x;

  for (i = FIRST_PSEUDO_REGISTER; i < reg_equiv_init_size; i++)
    {
      constant = NULL_RTX;
      invariant_p = false;
      for (list = reg_equiv_init[i]; list != NULL_RTX; list = XEXP (list, 1))
        {
          insn = XEXP (list, 0);
          note = find_reg_note (insn, REG_EQUIV, NULL_RTX);

          if (note == NULL_RTX)
            continue;

          x = XEXP (note, 0);

          if (! function_invariant_p (x)
              || ! flag_pic
              /* A function invariant is often CONSTANT_P but may
                 include a register.  We promise to only pass CONSTANT_P
                 objects to LEGITIMATE_PIC_OPERAND_P.  */
              || (CONSTANT_P (x) && LEGITIMATE_PIC_OPERAND_P (x)))
            {
              /* It can happen that a REG_EQUIV note contains a MEM
                 that is not a legitimate memory operand.  As later
                 stages of reload assume that all addresses found in
                 the reg_equiv_* arrays were originally legitimate, we
                 ignore such REG_EQUIV notes.  */
              if (memory_operand (x, VOIDmode))
                invariant_p = MEM_READONLY_P (x);
              else if (function_invariant_p (x))
                {
                  if (GET_CODE (x) == PLUS
                      || x == frame_pointer_rtx || x == arg_pointer_rtx)
                    invariant_p = true;
                  else
                    constant = x;
                }
            }
        }
      ira_reg_equiv_invariant_p[i] = invariant_p;
      ira_reg_equiv_const[i] = constant;
    }
}



/* Vector of substitutions of register numbers,
   used to map pseudo regs into hardware regs.
   This is set up as a result of register allocation.
   Element N is the hard reg assigned to pseudo reg N,
   or is -1 if no hard reg was assigned.
   If N is a hard reg number, element N is N.  */
short *reg_renumber;

/* Set up REG_RENUMBER and CALLER_SAVE_NEEDED (used by reload) from
   the allocation found by IRA.  */
static void
setup_reg_renumber (void)
{
  int regno, hard_regno;
  ira_allocno_t a;
  ira_allocno_iterator ai;

  caller_save_needed = 0;
  FOR_EACH_ALLOCNO (a, ai)
    {
      /* There are no caps at this point.  */
      ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
      if (! ALLOCNO_ASSIGNED_P (a))
        /* This can happen if A is not referenced but partially
           anticipated somewhere in a region.  */
        ALLOCNO_ASSIGNED_P (a) = true;
      ira_free_allocno_updated_costs (a);
      hard_regno = ALLOCNO_HARD_REGNO (a);
      regno = (int) REGNO (ALLOCNO_REG (a));
      reg_renumber[regno] = (hard_regno < 0 ? -1 : hard_regno);
      if (hard_regno >= 0 && ALLOCNO_CALLS_CROSSED_NUM (a) != 0
          && ! ira_hard_reg_not_in_set_p (hard_regno, ALLOCNO_MODE (a),
                                          call_used_reg_set))
        {
          ira_assert (!optimize || flag_caller_saves
                      || regno >= ira_reg_equiv_len
                      || ira_reg_equiv_const[regno]
                      || ira_reg_equiv_invariant_p[regno]);
          caller_save_needed = 1;
        }
    }
}
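
/* Illustrative sketch (hypothetical helper): consumers such as reload
   read the allocation result through reg_renumber, which the function
   above fills in.  */
static bool
example_pseudo_got_hard_reg_p (int regno)
{
  /* -1 means the pseudo stays in memory; any other value is the first
     hard register of the assigned block.  */
  return reg_renumber[regno] >= 0;
}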

/* Set up allocno assignment flags for further allocation
   improvements.  */
static void
setup_allocno_assignment_flags (void)
{
  int hard_regno;
  ira_allocno_t a;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      if (! ALLOCNO_ASSIGNED_P (a))
        /* This can happen if A is not referenced but partially
           anticipated somewhere in a region.  */
        ira_free_allocno_updated_costs (a);
      hard_regno = ALLOCNO_HARD_REGNO (a);
      /* Don't assign hard registers to allocnos which are the
         destinations of stores removed at the end of a loop.  It
         makes no sense to keep the same value in different hard
         registers.  It is also impossible to assign hard registers
         correctly to such allocnos because the cost info and info
         about intersected calls are incorrect for them.  */
      ALLOCNO_ASSIGNED_P (a) = (hard_regno >= 0
                                || ALLOCNO_MEM_OPTIMIZED_DEST_P (a)
                                || (ALLOCNO_MEMORY_COST (a)
                                    - ALLOCNO_COVER_CLASS_COST (a)) < 0);
      ira_assert (hard_regno < 0
                  || ! ira_hard_reg_not_in_set_p (hard_regno, ALLOCNO_MODE (a),
                                                  reg_class_contents
                                                  [ALLOCNO_COVER_CLASS (a)]));
    }
}

/* Evaluate overall allocation cost and the costs for using hard
   registers and memory for allocnos.  */
static void
calculate_allocation_cost (void)
{
  int hard_regno, cost;
  ira_allocno_t a;
  ira_allocno_iterator ai;

  ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
  FOR_EACH_ALLOCNO (a, ai)
    {
      hard_regno = ALLOCNO_HARD_REGNO (a);
      ira_assert (hard_regno < 0
                  || ! ira_hard_reg_not_in_set_p
                       (hard_regno, ALLOCNO_MODE (a),
                        reg_class_contents[ALLOCNO_COVER_CLASS (a)]));
      if (hard_regno < 0)
        {
          cost = ALLOCNO_MEMORY_COST (a);
          ira_mem_cost += cost;
        }
      else if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
        {
          cost = (ALLOCNO_HARD_REG_COSTS (a)
                  [ira_class_hard_reg_index
                   [ALLOCNO_COVER_CLASS (a)][hard_regno]]);
          ira_reg_cost += cost;
        }
      else
        {
          cost = ALLOCNO_COVER_CLASS_COST (a);
          ira_reg_cost += cost;
        }
      ira_overall_cost += cost;
    }

  if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
    {
      fprintf (ira_dump_file,
               "+++Costs: overall %d, reg %d, mem %d, ld %d, st %d, move %d\n",
               ira_overall_cost, ira_reg_cost, ira_mem_cost,
               ira_load_cost, ira_store_cost, ira_shuffle_cost);
      fprintf (ira_dump_file, "+++       move loops %d, new jumps %d\n",
               ira_move_loops_num, ira_additional_jumps_num);
    }
}
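
/* Illustrative check (hypothetical, in the spirit of
   ENABLE_IRA_CHECKING): the loop above adds each allocno's cost to
   exactly one of the two buckets, so immediately after
   calculate_allocation_cost the totals partition the overall cost.  */
static void
example_check_cost_invariant (void)
{
  gcc_assert (ira_overall_cost == ira_reg_cost + ira_mem_cost);
}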

#ifdef ENABLE_IRA_CHECKING
/* Check the correctness of the allocation.  We need this because of
   the complicated code that transforms the multi-region internal
   representation into the one-region representation.  */
static void
check_allocation (void)
{
  ira_allocno_t a, conflict_a;
  int hard_regno, conflict_hard_regno, nregs, conflict_nregs;
  ira_allocno_conflict_iterator aci;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      if (ALLOCNO_CAP_MEMBER (a) != NULL
          || (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
        continue;
      nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
      FOR_EACH_ALLOCNO_CONFLICT (a, conflict_a, aci)
        if ((conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a)) >= 0)
          {
            conflict_nregs
              = (hard_regno_nregs
                 [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
            if ((conflict_hard_regno <= hard_regno
                 && hard_regno < conflict_hard_regno + conflict_nregs)
                || (hard_regno <= conflict_hard_regno
                    && conflict_hard_regno < hard_regno + nregs))
              {
                fprintf (stderr, "bad allocation for %d and %d\n",
                         ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
                gcc_unreachable ();
              }
          }
    }
}
#endif
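
/* Illustrative sketch (hypothetical helper): the conflict test in
   check_allocation is interval overlap on hard register blocks.
   [R1, R1+N1) and [R2, R2+N2) overlap iff each start lies below the
   other block's end; e.g. R1=3,N1=2 overlaps R2=4,N2=1 on register 4.
   The two-sided form used above is equivalent for positive N1/N2.  */
static bool
example_hard_reg_blocks_overlap_p (int r1, int n1, int r2, int n2)
{
  return r1 < r2 + n2 && r2 < r1 + n1;
}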

/* Fix values of array REG_EQUIV_INIT after live range splitting done
   by IRA.  */
static void
fix_reg_equiv_init (void)
{
  int max_regno = max_reg_num ();
  int i, new_regno;
  rtx x, prev, next, insn, set;

  if (reg_equiv_init_size < max_regno)
    {
      reg_equiv_init
        = (rtx *) ggc_realloc (reg_equiv_init, max_regno * sizeof (rtx));
      while (reg_equiv_init_size < max_regno)
        reg_equiv_init[reg_equiv_init_size++] = NULL_RTX;
      for (i = FIRST_PSEUDO_REGISTER; i < reg_equiv_init_size; i++)
        for (prev = NULL_RTX, x = reg_equiv_init[i]; x != NULL_RTX; x = next)
          {
            next = XEXP (x, 1);
            insn = XEXP (x, 0);
            set = single_set (insn);
            ira_assert (set != NULL_RTX
                        && (REG_P (SET_DEST (set)) || REG_P (SET_SRC (set))));
            if (REG_P (SET_DEST (set))
                && ((int) REGNO (SET_DEST (set)) == i
                    || (int) ORIGINAL_REGNO (SET_DEST (set)) == i))
              new_regno = REGNO (SET_DEST (set));
            else if (REG_P (SET_SRC (set))
                     && ((int) REGNO (SET_SRC (set)) == i
                         || (int) ORIGINAL_REGNO (SET_SRC (set)) == i))
              new_regno = REGNO (SET_SRC (set));
            else
              gcc_unreachable ();
            if (new_regno == i)
              prev = x;
            else
              {
                if (prev == NULL_RTX)
                  reg_equiv_init[i] = next;
                else
                  XEXP (prev, 1) = next;
                XEXP (x, 1) = reg_equiv_init[new_regno];
                reg_equiv_init[new_regno] = x;
              }
          }
    }
}

#ifdef ENABLE_IRA_CHECKING
/* Print redundant memory-memory copies.  */
static void
print_redundant_copies (void)
{
  int hard_regno;
  ira_allocno_t a;
  ira_copy_t cp, next_cp;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      if (ALLOCNO_CAP_MEMBER (a) != NULL)
        /* It is a cap.  */
        continue;
      hard_regno = ALLOCNO_HARD_REGNO (a);
      if (hard_regno >= 0)
        continue;
      for (cp = ALLOCNO_COPIES (a); cp != NULL; cp = next_cp)
        if (cp->first == a)
          next_cp = cp->next_first_allocno_copy;
        else
          {
            next_cp = cp->next_second_allocno_copy;
            if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL
                && cp->insn != NULL_RTX
                && ALLOCNO_HARD_REGNO (cp->first) == hard_regno)
              fprintf (ira_dump_file,
                       "        Redundant move from %d(freq %d):%d\n",
                       INSN_UID (cp->insn), cp->freq, hard_regno);
          }
    }
}
#endif

/* Set up preferred and alternative classes for new pseudo-registers
   created by IRA starting with START.  */
static void
setup_preferred_alternate_classes_for_new_pseudos (int start)
{
  int i, old_regno;
  int max_regno = max_reg_num ();

  for (i = start; i < max_regno; i++)
    {
      old_regno = ORIGINAL_REGNO (regno_reg_rtx[i]);
      ira_assert (i != old_regno);
      setup_reg_classes (i, reg_preferred_class (old_regno),
                         reg_alternate_class (old_regno),
                         reg_cover_class (old_regno));
      if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
        fprintf (ira_dump_file,
                 "    New r%d: setting preferred %s, alternative %s\n",
                 i, reg_class_names[reg_preferred_class (old_regno)],
                 reg_class_names[reg_alternate_class (old_regno)]);
    }
}



/* Regional allocation can create new pseudo-registers.  This function
   expands some arrays for pseudo-registers.  */
static void
expand_reg_info (int old_size)
{
  int i;
  int size = max_reg_num ();

  resize_reg_info ();
  for (i = old_size; i < size; i++)
    setup_reg_classes (i, GENERAL_REGS, ALL_REGS, GENERAL_REGS);
}

/* Return TRUE if the register pressure in the function is too high.
   This is used to decide whether stack slot sharing is worth
   doing.  */
static bool
too_high_register_pressure_p (void)
{
  int i;
  enum reg_class cover_class;

  for (i = 0; i < ira_reg_class_cover_size; i++)
    {
      cover_class = ira_reg_class_cover[i];
      if (ira_loop_tree_root->reg_pressure[cover_class] > 10000)
        return true;
    }
  return false;
}



/* Indicate that hard register number FROM was eliminated and replaced with
   an offset from hard register number TO.  The status of hard registers live
   at the start of a basic block is updated by replacing a use of FROM with
   a use of TO.  */

void
mark_elimination (int from, int to)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    {
      /* We don't use LIVE info in IRA.  */
      regset r = DF_LR_IN (bb);

      if (REGNO_REG_SET_P (r, from))
        {
          CLEAR_REGNO_REG_SET (r, from);
          SET_REGNO_REG_SET (r, to);
        }
    }
}
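
/* Illustrative usage (hypothetical): when the frame pointer is
   eliminated to the stack pointer, block-entry liveness is patched so
   later passes see the stack pointer live instead.  */
static void
example_eliminate_frame_pointer (void)
{
  mark_elimination (FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM);
}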



struct equivalence
{
  /* Set when a REG_EQUIV note is found or created.  Used to
     keep track of what memory accesses might be created later,
     e.g. by reload.  */
  rtx replacement;
  rtx *src_p;
  /* The list of each instruction which initializes this register.  */
  rtx init_insns;
  /* Loop depth is used to recognize equivalences which appear
     to be present within the same loop (or in an inner loop).  */
  int loop_depth;
  /* Nonzero if this had a preexisting REG_EQUIV note.  */
  int is_arg_equivalence;
  /* Set when an attempt should be made to replace a register
     with the associated src_p entry.  */
  char replace;
};

/* reg_equiv[N] (where N is a pseudo reg number) is the equivalence
   structure for that register.  */
static struct equivalence *reg_equiv;

/* Used for communication between the following two functions: contains
   a MEM that we wish to ensure remains unchanged.  */
static rtx equiv_mem;

/* Set nonzero if EQUIV_MEM is modified.  */
static int equiv_mem_modified;

/* If EQUIV_MEM is modified by modifying DEST, indicate that it is modified.
   Called via note_stores.  */
static void
validate_equiv_mem_from_store (rtx dest, const_rtx set ATTRIBUTE_UNUSED,
                               void *data ATTRIBUTE_UNUSED)
{
  if ((REG_P (dest)
       && reg_overlap_mentioned_p (dest, equiv_mem))
      || (MEM_P (dest)
          && true_dependence (dest, VOIDmode, equiv_mem, rtx_varies_p)))
    equiv_mem_modified = 1;
}

/* Verify that no store between START and the death of REG invalidates
   MEMREF.  MEMREF is invalidated by modifying a register used in MEMREF,
   by storing into an overlapping memory location, or with a non-const
   CALL_INSN.

   Return 1 if MEMREF remains valid.  */
static int
validate_equiv_mem (rtx start, rtx reg, rtx memref)
{
  rtx insn;
  rtx note;

  equiv_mem = memref;
  equiv_mem_modified = 0;

  /* If the memory reference has side effects or is volatile, it isn't a
     valid equivalence.  */
  if (side_effects_p (memref))
    return 0;

  for (insn = start; insn && ! equiv_mem_modified; insn = NEXT_INSN (insn))
    {
      if (! INSN_P (insn))
        continue;

      if (find_reg_note (insn, REG_DEAD, reg))
        return 1;

      if (CALL_P (insn) && ! MEM_READONLY_P (memref)
          && ! RTL_CONST_OR_PURE_CALL_P (insn))
        return 0;

      note_stores (PATTERN (insn), validate_equiv_mem_from_store, NULL);

      /* If a register mentioned in MEMREF is modified via an
         auto-increment, we lose the equivalence.  Do the same if one
         dies; although we could extend the life, it doesn't seem worth
         the trouble.  */

      for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
        if ((REG_NOTE_KIND (note) == REG_INC
             || REG_NOTE_KIND (note) == REG_DEAD)
            && REG_P (XEXP (note, 0))
            && reg_overlap_mentioned_p (XEXP (note, 0), memref))
          return 0;
    }

  return 0;
}

/* Returns zero if X is known to be invariant.  */
static int
equiv_init_varies_p (rtx x)
{
  RTX_CODE code = GET_CODE (x);
  int i;
  const char *fmt;

  switch (code)
    {
    case MEM:
      return !MEM_READONLY_P (x) || equiv_init_varies_p (XEXP (x, 0));

    case CONST:
    case CONST_INT:
    case CONST_DOUBLE:
    case CONST_FIXED:
    case CONST_VECTOR:
    case SYMBOL_REF:
    case LABEL_REF:
      return 0;

    case REG:
      return reg_equiv[REGNO (x)].replace == 0 && rtx_varies_p (x, 0);

    case ASM_OPERANDS:
      if (MEM_VOLATILE_P (x))
        return 1;

      /* Fall through.  */

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    if (fmt[i] == 'e')
      {
        if (equiv_init_varies_p (XEXP (x, i)))
          return 1;
      }
    else if (fmt[i] == 'E')
      {
        int j;
        for (j = 0; j < XVECLEN (x, i); j++)
          if (equiv_init_varies_p (XVECEXP (x, i, j)))
            return 1;
      }

  return 0;
}

/* Returns nonzero if X (used to initialize register REGNO) is movable.
   X is movable only if each register it uses either has an equivalent
   initialization that appears within the same loop (or in an inner
   loop) and is itself movable, or is not a candidate for local_alloc
   and does not vary.  */
static int
equiv_init_movable_p (rtx x, int regno)
{
  int i, j;
  const char *fmt;
  enum rtx_code code = GET_CODE (x);

  switch (code)
    {
    case SET:
      return equiv_init_movable_p (SET_SRC (x), regno);

    case CC0:
    case CLOBBER:
      return 0;

    case PRE_INC:
    case PRE_DEC:
    case POST_INC:
    case POST_DEC:
    case PRE_MODIFY:
    case POST_MODIFY:
      return 0;

    case REG:
      return (reg_equiv[REGNO (x)].loop_depth >= reg_equiv[regno].loop_depth
              && reg_equiv[REGNO (x)].replace)
             || (REG_BASIC_BLOCK (REGNO (x)) < NUM_FIXED_BLOCKS
                 && ! rtx_varies_p (x, 0));

    case UNSPEC_VOLATILE:
      return 0;

    case ASM_OPERANDS:
      if (MEM_VOLATILE_P (x))
        return 0;

      /* Fall through.  */

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    switch (fmt[i])
      {
      case 'e':
        if (! equiv_init_movable_p (XEXP (x, i), regno))
          return 0;
        break;
      case 'E':
        for (j = XVECLEN (x, i) - 1; j >= 0; j--)
          if (! equiv_init_movable_p (XVECEXP (x, i, j), regno))
            return 0;
        break;
      }

  return 1;
}

/* TRUE if X uses any registers for which reg_equiv[REGNO].replace is true.  */
static int
contains_replace_regs (rtx x)
{
  int i, j;
  const char *fmt;
  enum rtx_code code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST:
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_DOUBLE:
    case CONST_FIXED:
    case CONST_VECTOR:
    case PC:
    case CC0:
    case HIGH:
      return 0;

    case REG:
      return reg_equiv[REGNO (x)].replace;

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    switch (fmt[i])
      {
      case 'e':
        if (contains_replace_regs (XEXP (x, i)))
          return 1;
        break;
      case 'E':
        for (j = XVECLEN (x, i) - 1; j >= 0; j--)
          if (contains_replace_regs (XVECEXP (x, i, j)))
            return 1;
        break;
      }

  return 0;
}

/* TRUE if X references a memory location that would be affected by a store
   to MEMREF.  */
static int
memref_referenced_p (rtx memref, rtx x)
{
  int i, j;
  const char *fmt;
  enum rtx_code code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST:
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST_DOUBLE:
    case CONST_FIXED:
    case CONST_VECTOR:
    case PC:
    case CC0:
    case HIGH:
    case LO_SUM:
      return 0;

    case REG:
      return (reg_equiv[REGNO (x)].replacement
              && memref_referenced_p (memref,
                                      reg_equiv[REGNO (x)].replacement));

    case MEM:
      if (true_dependence (memref, VOIDmode, x, rtx_varies_p))
        return 1;
      break;

    case SET:
      /* If we are setting a MEM, it doesn't count (its address does), but any
         other SET_DEST that has a MEM in it is referencing the MEM.  */
      if (MEM_P (SET_DEST (x)))
        {
          if (memref_referenced_p (memref, XEXP (SET_DEST (x), 0)))
            return 1;
        }
      else if (memref_referenced_p (memref, SET_DEST (x)))
        return 1;

      return memref_referenced_p (memref, SET_SRC (x));

    default:
      break;
    }

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    switch (fmt[i])
      {
      case 'e':
        if (memref_referenced_p (memref, XEXP (x, i)))
          return 1;
        break;
      case 'E':
        for (j = XVECLEN (x, i) - 1; j >= 0; j--)
          if (memref_referenced_p (memref, XVECEXP (x, i, j)))
            return 1;
        break;
      }

  return 0;
}

/* TRUE if some insn in the range (START, END] references a memory location
   that would be affected by a store to MEMREF.  */
static int
memref_used_between_p (rtx memref, rtx start, rtx end)
{
  rtx insn;

  for (insn = NEXT_INSN (start); insn != NEXT_INSN (end);
       insn = NEXT_INSN (insn))
    {
      if (!NONDEBUG_INSN_P (insn))
        continue;

      if (memref_referenced_p (memref, PATTERN (insn)))
        return 1;

      /* Nonconst functions may access memory.  */
      if (CALL_P (insn) && (! RTL_CONST_CALL_P (insn)))
        return 1;
    }

  return 0;
}

/* Mark REG as having no known equivalence.
   Some instructions might have been processed before and furnished
   with REG_EQUIV notes for this register; these notes will have to be
   removed.
   STORE is the piece of RTL that does the non-constant / conflicting
   assignment - a SET, CLOBBER or REG_INC note.  It is currently not used,
   but needs to be there because this function is called from note_stores.  */
static void
no_equiv (rtx reg, const_rtx store ATTRIBUTE_UNUSED, void *data ATTRIBUTE_UNUSED)
{
  int regno;
  rtx list;

  if (!REG_P (reg))
    return;
  regno = REGNO (reg);
  list = reg_equiv[regno].init_insns;
  if (list == const0_rtx)
    return;
  reg_equiv[regno].init_insns = const0_rtx;
  reg_equiv[regno].replacement = NULL_RTX;
  /* This doesn't matter for equivalences made for argument registers;
     we should keep their initialization insns.  */
  if (reg_equiv[regno].is_arg_equivalence)
    return;
  reg_equiv_init[regno] = NULL_RTX;
  for (; list; list = XEXP (list, 1))
    {
      rtx insn = XEXP (list, 0);
      remove_note (insn, find_reg_note (insn, REG_EQUIV, NULL_RTX));
    }
}

/* In a DEBUG_INSN location, adjust REGs found in the CLEARED_REGS
   bitmap to their equivalent replacements.  */

static rtx
adjust_cleared_regs (rtx loc, const_rtx old_rtx ATTRIBUTE_UNUSED, void *data)
{
  if (REG_P (loc))
    {
      bitmap cleared_regs = (bitmap) data;
      if (bitmap_bit_p (cleared_regs, REGNO (loc)))
        return simplify_replace_fn_rtx (*reg_equiv[REGNO (loc)].src_p,
                                        NULL_RTX, adjust_cleared_regs, data);
    }
  return NULL_RTX;
}

/* Nonzero if we recorded an equivalence for a LABEL_REF.  */
static int recorded_label_ref;

/* Find registers that are equivalent to a single value throughout the
   compilation (either because they can be referenced in memory or are set once
   from a single constant).  Lower their priority for a register.

   If such a register is only referenced once, try substituting its value
   into the using insn.  If it succeeds, we can eliminate the register
   completely.

   Initialize the REG_EQUIV_INIT array of initializing insns.

   Return nonzero if jump label rebuilding should be done.  */
static int
update_equiv_regs (void)
{
  rtx insn;
  basic_block bb;
  int loop_depth;
  bitmap cleared_regs;

  /* We need to keep track of whether or not we recorded a LABEL_REF so
     that we know if the jump optimizer needs to be rerun.  */
  recorded_label_ref = 0;

  reg_equiv = XCNEWVEC (struct equivalence, max_regno);
  reg_equiv_init = GGC_CNEWVEC (rtx, max_regno);
  reg_equiv_init_size = max_regno;

  init_alias_analysis ();

  /* Scan the insns and find which registers have equivalences.  Do this
     in a separate scan of the insns because (due to -fcse-follow-jumps)
     a register can be set below its use.  */
  FOR_EACH_BB (bb)
    {
      loop_depth = bb->loop_depth;

      for (insn = BB_HEAD (bb);
           insn != NEXT_INSN (BB_END (bb));
           insn = NEXT_INSN (insn))
        {
          rtx note;
          rtx set;
          rtx dest, src;
          int regno;

          if (! INSN_P (insn))
            continue;

          for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
            if (REG_NOTE_KIND (note) == REG_INC)
              no_equiv (XEXP (note, 0), note, NULL);

          set = single_set (insn);

          /* If this insn contains more (or less) than a single SET,
             simply mark all destinations as having no known
             equivalence.  */
          if (set == 0)
            {
              note_stores (PATTERN (insn), no_equiv, NULL);
              continue;
            }
          else if (GET_CODE (PATTERN (insn)) == PARALLEL)
            {
              int i;

              for (i = XVECLEN (PATTERN (insn), 0) - 1; i >= 0; i--)
                {
                  rtx part = XVECEXP (PATTERN (insn), 0, i);
                  if (part != set)
                    note_stores (part, no_equiv, NULL);
                }
            }

          dest = SET_DEST (set);
          src = SET_SRC (set);

          /* See if this is setting up the equivalence between an argument
             register and its stack slot.  */
          note = find_reg_note (insn, REG_EQUIV, NULL_RTX);
          if (note)
            {
              gcc_assert (REG_P (dest));
              regno = REGNO (dest);

              /* Note that we don't want to clear reg_equiv_init even if there
                 are multiple sets of this register.  */
              reg_equiv[regno].is_arg_equivalence = 1;

              /* Record for reload that this is an equivalencing insn.  */
              if (rtx_equal_p (src, XEXP (note, 0)))
                reg_equiv_init[regno]
                  = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv_init[regno]);

              /* Continue normally in case this is a candidate for
                 replacements.  */
            }

          if (!optimize)
            continue;

          /* We only handle the case of a pseudo register being set
             once, or always to the same value.  */
          /* ??? The mn10200 port breaks if we add equivalences for
             values that need an ADDRESS_REGS register and set them equivalent
             to a MEM of a pseudo.  The actual problem is in the over-conservative
             handling of INPADDR_ADDRESS / INPUT_ADDRESS / INPUT triples in
             calculate_needs, but we traditionally work around this problem
             here by rejecting equivalences when the destination is in a register
             that's likely spilled.  This is fragile, of course, since the
             preferred class of a pseudo depends on all instructions that set
             or use it.  */

          if (!REG_P (dest)
              || (regno = REGNO (dest)) < FIRST_PSEUDO_REGISTER
              || reg_equiv[regno].init_insns == const0_rtx
              || (CLASS_LIKELY_SPILLED_P (reg_preferred_class (regno))
                  && MEM_P (src) && ! reg_equiv[regno].is_arg_equivalence))
            {
              /* This might be setting a SUBREG of a pseudo, a pseudo that is
                 also set somewhere else to a constant.  */
              note_stores (set, no_equiv, NULL);
              continue;
            }

          note = find_reg_note (insn, REG_EQUAL, NULL_RTX);

          /* cse sometimes generates function invariants, but doesn't put a
             REG_EQUAL note on the insn.  Since this note would be redundant,
             there's no point creating it earlier than here.  */
          if (! note && ! rtx_varies_p (src, 0))
            note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src));

          /* Don't bother considering a REG_EQUAL note containing an EXPR_LIST
             since it represents a function call.  */
          if (note && GET_CODE (XEXP (note, 0)) == EXPR_LIST)
            note = NULL_RTX;

          if (DF_REG_DEF_COUNT (regno) != 1
              && (! note
                  || rtx_varies_p (XEXP (note, 0), 0)
                  || (reg_equiv[regno].replacement
                      && ! rtx_equal_p (XEXP (note, 0),
                                        reg_equiv[regno].replacement))))
            {
              no_equiv (dest, set, NULL);
              continue;
            }
          /* Record this insn as initializing this register.  */
          reg_equiv[regno].init_insns
            = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv[regno].init_insns);

          /* If this register is known to be equal to a constant, record that
             it is always equivalent to the constant.  */
          if (DF_REG_DEF_COUNT (regno) == 1
              && note && ! rtx_varies_p (XEXP (note, 0), 0))
            {
              rtx note_value = XEXP (note, 0);
              remove_note (insn, note);
              set_unique_reg_note (insn, REG_EQUIV, note_value);
            }

          /* If this insn introduces a "constant" register, decrease the priority
             of that register.  Record this insn if the register is only used once
             more and the equivalence value is the same as our source.

             The latter condition is checked for two reasons:  First, it is an
             indication that it may be more efficient to actually emit the insn
             as written (if no registers are available, reload will substitute
             the equivalence).  Secondly, it avoids problems with any registers
             dying in this insn whose death notes would be missed.

             If we don't have a REG_EQUIV note, see if this insn is loading
             a register used only in one basic block from a MEM.  If so, and the
             MEM remains unchanged for the life of the register, add a REG_EQUIV
             note.  */

          note = find_reg_note (insn, REG_EQUIV, NULL_RTX);

          if (note == 0 && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
              && MEM_P (SET_SRC (set))
              && validate_equiv_mem (insn, dest, SET_SRC (set)))
            note = set_unique_reg_note (insn, REG_EQUIV, copy_rtx (SET_SRC (set)));

          if (note)
            {
              int regno = REGNO (dest);
              rtx x = XEXP (note, 0);

              /* If we haven't done so, record for reload that this is an
                 equivalencing insn.  */
              if (!reg_equiv[regno].is_arg_equivalence)
                reg_equiv_init[regno]
                  = gen_rtx_INSN_LIST (VOIDmode, insn, reg_equiv_init[regno]);

              /* Record whether or not we created a REG_EQUIV note for a LABEL_REF.
                 We might end up substituting the LABEL_REF for uses of the
                 pseudo here or later.  That kind of transformation may turn an
                 indirect jump into a direct jump, in which case we must rerun the
                 jump optimizer to ensure that the JUMP_LABEL fields are valid.  */
              if (GET_CODE (x) == LABEL_REF
                  || (GET_CODE (x) == CONST
                      && GET_CODE (XEXP (x, 0)) == PLUS
                      && (GET_CODE (XEXP (XEXP (x, 0), 0)) == LABEL_REF)))
                recorded_label_ref = 1;

              reg_equiv[regno].replacement = x;
              reg_equiv[regno].src_p = &SET_SRC (set);
              reg_equiv[regno].loop_depth = loop_depth;

              /* Don't mess with things live during setjmp.  */
              if (REG_LIVE_LENGTH (regno) >= 0 && optimize)
                {
                  /* Note that the statement below does not affect the priority
                     in local-alloc!  */
                  REG_LIVE_LENGTH (regno) *= 2;

                  /* If the register is referenced exactly twice, meaning it is
                     set once and used once, indicate that the reference may be
                     replaced by the equivalence we computed above.  Do this
                     even if the register is only used in one block so that
                     dependencies can be handled where the last register is
                     used in a different block (i.e. HIGH / LO_SUM sequences)
                     and to reduce the number of registers alive across
                     calls.  */

                  if (REG_N_REFS (regno) == 2
                      && (rtx_equal_p (x, src)
                          || ! equiv_init_varies_p (src))
                      && NONJUMP_INSN_P (insn)
                      && equiv_init_movable_p (PATTERN (insn), regno))
                    reg_equiv[regno].replace = 1;
                }
            }
        }
    }

  if (!optimize)
    goto out;

  /* A second pass, to gather additional equivalences with memory.  This needs
     to be done after we know which registers we are going to replace.  */

  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    {
      rtx set, src, dest;
      unsigned regno;

      if (! INSN_P (insn))
        continue;

      set = single_set (insn);
      if (! set)
        continue;

      dest = SET_DEST (set);
      src = SET_SRC (set);

      /* If this sets a MEM to the contents of a REG that is only used
         in a single basic block, see if the register is always equivalent
         to that memory location and if moving the store from INSN to the
         insn that set REG is safe.  If so, put a REG_EQUIV note on the
         initializing insn.

         Don't add a REG_EQUIV note if the insn already has one.  The existing
         REG_EQUIV is likely more useful than the one we are adding.

         If one of the regs in the address has reg_equiv[REGNO].replace set,
         then we can't add this REG_EQUIV note.  The reg_equiv[REGNO].replace
         optimization may move the set of this register immediately before
         insn, which puts it after reg_equiv[REGNO].init_insns, and hence
         the mention in the REG_EQUIV note would be to an uninitialized
         pseudo.  */

      if (MEM_P (dest) && REG_P (src)
          && (regno = REGNO (src)) >= FIRST_PSEUDO_REGISTER
          && REG_BASIC_BLOCK (regno) >= NUM_FIXED_BLOCKS
          && DF_REG_DEF_COUNT (regno) == 1
          && reg_equiv[regno].init_insns != 0
          && reg_equiv[regno].init_insns != const0_rtx
          && ! find_reg_note (XEXP (reg_equiv[regno].init_insns, 0),
                              REG_EQUIV, NULL_RTX)
          && ! contains_replace_regs (XEXP (dest, 0)))
        {
          rtx init_insn = XEXP (reg_equiv[regno].init_insns, 0);
          if (validate_equiv_mem (init_insn, src, dest)
              && ! memref_used_between_p (dest, init_insn, insn)
              /* Attaching a REG_EQUIV note will fail if INIT_INSN has
                 multiple sets.  */
              && set_unique_reg_note (init_insn, REG_EQUIV, copy_rtx (dest)))
            {
              /* This insn makes the equivalence, not the one initializing
                 the register.  */
              reg_equiv_init[regno]
                = gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX);
              df_notes_rescan (init_insn);
            }
        }
    }

  cleared_regs = BITMAP_ALLOC (NULL);
  /* Now scan all regs killed in an insn to see if any of them are
     registers only used once.  If so, see if we can replace the
     reference with the equivalent form.  If we can, delete the
     initializing reference and this register will go away.  If we
     can't replace the reference, and the initializing reference is
     within the same loop (or in an inner loop), then move the register
     initialization just before the use, so that they are in the same
     basic block.  */
  FOR_EACH_BB_REVERSE (bb)
    {
      loop_depth = bb->loop_depth;
      for (insn = BB_END (bb);
           insn != PREV_INSN (BB_HEAD (bb));
           insn = PREV_INSN (insn))
        {
          rtx link;

          if (! INSN_P (insn))
            continue;

          /* Don't substitute into a non-local goto; this confuses the
             CFG.  */
          if (JUMP_P (insn)
              && find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
            continue;

          for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
            {
              if (REG_NOTE_KIND (link) == REG_DEAD
                  /* Make sure this insn still refers to the register.  */
                  && reg_mentioned_p (XEXP (link, 0), PATTERN (insn)))
                {
                  int regno = REGNO (XEXP (link, 0));
                  rtx equiv_insn;

                  if (! reg_equiv[regno].replace
                      || reg_equiv[regno].loop_depth < loop_depth)
                    continue;

                  /* reg_equiv[REGNO].replace gets set only when
                     REG_N_REFS[REGNO] is 2, i.e. the register is set
                     once and used once.  (If it were only set, but not used,
                     flow would have deleted the setting insns.)  Hence
                     there can only be one insn in reg_equiv[REGNO].init_insns.  */
                  gcc_assert (reg_equiv[regno].init_insns
                              && !XEXP (reg_equiv[regno].init_insns, 1));
                  equiv_insn = XEXP (reg_equiv[regno].init_insns, 0);

                  /* We may not move instructions that can throw, since
                     that changes basic block boundaries and we are not
                     prepared to adjust the CFG to match.  */
                  if (can_throw_internal (equiv_insn))
                    continue;

                  if (asm_noperands (PATTERN (equiv_insn)) < 0
                      && validate_replace_rtx (regno_reg_rtx[regno],
                                               *(reg_equiv[regno].src_p), insn))
                    {
                      rtx equiv_link;
                      rtx last_link;
                      rtx note;

                      /* Find the last note.  */
                      for (last_link = link; XEXP (last_link, 1);
                           last_link = XEXP (last_link, 1))
                        ;

                      /* Append the REG_DEAD notes from equiv_insn.  */
                      equiv_link = REG_NOTES (equiv_insn);
                      while (equiv_link)
                        {
                          note = equiv_link;
                          equiv_link = XEXP (equiv_link, 1);
                          if (REG_NOTE_KIND (note) == REG_DEAD)
                            {
                              remove_note (equiv_insn, note);
                              XEXP (last_link, 1) = note;
                              XEXP (note, 1) = NULL_RTX;
                              last_link = note;
                            }
                        }

                      remove_death (regno, insn);
                      SET_REG_N_REFS (regno, 0);
                      REG_FREQ (regno) = 0;
                      delete_insn (equiv_insn);

                      reg_equiv[regno].init_insns
                        = XEXP (reg_equiv[regno].init_insns, 1);

                      reg_equiv_init[regno] = NULL_RTX;
                      bitmap_set_bit (cleared_regs, regno);
                    }
                  /* Move the initialization of the register to just before
                     INSN.  Update the flow information.  */
                  else if (prev_nondebug_insn (insn) != equiv_insn)
                    {
                      rtx new_insn;

                      new_insn = emit_insn_before (PATTERN (equiv_insn), insn);
                      REG_NOTES (new_insn) = REG_NOTES (equiv_insn);
                      REG_NOTES (equiv_insn) = 0;
                      /* Rescan it to process the notes.  */
                      df_insn_rescan (new_insn);

                      /* Make sure this insn is recognized before
                         reload begins, otherwise
                         eliminate_regs_in_insn will die.  */
                      INSN_CODE (new_insn) = INSN_CODE (equiv_insn);

                      delete_insn (equiv_insn);

                      XEXP (reg_equiv[regno].init_insns, 0) = new_insn;

                      REG_BASIC_BLOCK (regno) = bb->index;
                      REG_N_CALLS_CROSSED (regno) = 0;
                      REG_FREQ_CALLS_CROSSED (regno) = 0;
                      REG_N_THROWING_CALLS_CROSSED (regno) = 0;
                      REG_LIVE_LENGTH (regno) = 2;

                      if (insn == BB_HEAD (bb))
                        BB_HEAD (bb) = PREV_INSN (insn);

                      reg_equiv_init[regno]
                        = gen_rtx_INSN_LIST (VOIDmode, new_insn, NULL_RTX);
                      bitmap_set_bit (cleared_regs, regno);
                    }
                }
            }
        }
    }

  if (!bitmap_empty_p (cleared_regs))
    {
      FOR_EACH_BB (bb)
        {
          bitmap_and_compl_into (DF_LIVE_IN (bb), cleared_regs);
          bitmap_and_compl_into (DF_LIVE_OUT (bb), cleared_regs);
          bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
          bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
        }

      /* Last pass - adjust debug insns referencing cleared regs.  */
      if (MAY_HAVE_DEBUG_INSNS)
        for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
          if (DEBUG_INSN_P (insn))
            {
              rtx old_loc = INSN_VAR_LOCATION_LOC (insn);
              INSN_VAR_LOCATION_LOC (insn)
                = simplify_replace_fn_rtx (old_loc, NULL_RTX,
                                           adjust_cleared_regs,
                                           (void *) cleared_regs);
              if (old_loc != INSN_VAR_LOCATION_LOC (insn))
                df_insn_rescan (insn);
            }
    }

  BITMAP_FREE (cleared_regs);

  out:
  /* Clean up.  */

  end_alias_analysis ();
  free (reg_equiv);
  return recorded_label_ref;
}
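
/* Illustrative sketch, not part of the original file: the shape of
   RTL update_equiv_regs looks for is a pseudo set once from an
   invariant, roughly

     (insn 10 ... (set (reg/v 60) (const_int 42))
        (expr_list:REG_EQUIV (const_int 42) (nil)))

   so reload can rematerialize the value instead of spilling it.  A
   hypothetical query for the resulting note:  */
static bool
example_has_equiv_note_p (rtx insn)
{
  return find_reg_note (insn, REG_EQUIV, NULL_RTX) != NULL_RTX;
}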



/* Print chain C to FILE.  */
static void
print_insn_chain (FILE *file, struct insn_chain *c)
{
  fprintf (file, "insn=%d, ", INSN_UID (c->insn));
  bitmap_print (file, &c->live_throughout, "live_throughout: ", ", ");
  bitmap_print (file, &c->dead_or_set, "dead_or_set: ", "\n");
}


/* Print all reload_insn_chains to FILE.  */
static void
print_insn_chains (FILE *file)
{
  struct insn_chain *c;
  for (c = reload_insn_chain; c; c = c->next)
    print_insn_chain (file, c);
}

/* Return true if pseudo REGNO should be added to the sets
   live_throughout or dead_or_set of the insn chains for reload
   consideration.  */
static bool
pseudo_for_reload_consideration_p (int regno)
{
  /* Consider spilled pseudos too for IRA because they still have a
     chance to get hard registers in reload when IRA is used.  */
  return (reg_renumber[regno] >= 0
          || (ira_conflicts_p && flag_ira_share_spill_slots));
}
2801
 
2802
/* Init LIVE_SUBREGS[ALLOCNUM] and LIVE_SUBREGS_USED[ALLOCNUM] using
2803
   REG to the number of nregs, and INIT_VALUE to get the
2804
   initialization.  ALLOCNUM need not be the regno of REG.  */
2805
static void
2806
init_live_subregs (bool init_value, sbitmap *live_subregs,
2807
                   int *live_subregs_used, int allocnum, rtx reg)
2808
{
2809
  unsigned int regno = REGNO (SUBREG_REG (reg));
2810
  int size = GET_MODE_SIZE (GET_MODE (regno_reg_rtx[regno]));
2811
 
2812
  gcc_assert (size > 0);
2813
 
2814
  /* Been there, done that.  */
2815
  if (live_subregs_used[allocnum])
2816
    return;
2817
 
2818
  /* Create a new one with zeros.  */
2819
  if (live_subregs[allocnum] == NULL)
2820
    live_subregs[allocnum] = sbitmap_alloc (size);
2821
 
2822
  /* If the entire reg was live before blasting into subregs, we need
2823
     to init all of the subregs to ones else init to 0.  */
2824
  if (init_value)
2825
    sbitmap_ones (live_subregs[allocnum]);
2826
  else
2827
    sbitmap_zero (live_subregs[allocnum]);
2828
 
2829
  /* Set the number of bits that we really want.  */
2830
  live_subregs_used[allocnum] = size;
2831
}
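
/* For example, a DImode pseudo (8 bytes) gets size == 8, so bits 0..7
   of its sbitmap track the liveness of the individual bytes, and
   live_subregs_used[allocnum] == 8 marks the map as in use.  */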

/* Walk the insns of the current function and build reload_insn_chain,
   and record register life information.  */
static void
build_insn_chain (void)
{
  unsigned int i;
  struct insn_chain **p = &reload_insn_chain;
  basic_block bb;
  struct insn_chain *c = NULL;
  struct insn_chain *next = NULL;
  bitmap live_relevant_regs = BITMAP_ALLOC (NULL);
  bitmap elim_regset = BITMAP_ALLOC (NULL);
  /* live_subregs is a vector used to keep accurate information about
     which hardregs are live in multiword pseudos.  live_subregs and
     live_subregs_used are indexed by pseudo number.  The live_subreg
     entry for a particular pseudo is only used if the corresponding
     element of live_subregs_used is nonzero.  The value in
     live_subregs_used is the number of bytes that the pseudo can
     occupy.  */
  sbitmap *live_subregs = XCNEWVEC (sbitmap, max_regno);
  int *live_subregs_used = XNEWVEC (int, max_regno);

  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
    if (TEST_HARD_REG_BIT (eliminable_regset, i))
      bitmap_set_bit (elim_regset, i);
  FOR_EACH_BB_REVERSE (bb)
    {
      bitmap_iterator bi;
      rtx insn;

      CLEAR_REG_SET (live_relevant_regs);
      memset (live_subregs_used, 0, max_regno * sizeof (int));

      EXECUTE_IF_SET_IN_BITMAP (DF_LR_OUT (bb), 0, i, bi)
        {
          if (i >= FIRST_PSEUDO_REGISTER)
            break;
          bitmap_set_bit (live_relevant_regs, i);
        }

      EXECUTE_IF_SET_IN_BITMAP (DF_LR_OUT (bb),
                                FIRST_PSEUDO_REGISTER, i, bi)
        {
          if (pseudo_for_reload_consideration_p (i))
            bitmap_set_bit (live_relevant_regs, i);
        }

      FOR_BB_INSNS_REVERSE (bb, insn)
        {
          if (!NOTE_P (insn) && !BARRIER_P (insn))
            {
              unsigned int uid = INSN_UID (insn);
              df_ref *def_rec;
              df_ref *use_rec;

              c = new_insn_chain ();
              c->next = next;
              next = c;
              *p = c;
              p = &c->prev;
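
              /* Insns are visited in reverse order here, so each
                 element's next field ends up pointing at the element
                 for the following insn; the final assignment to
                 reload_insn_chain below leaves the chain in forward
                 program order.  */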

              c->insn = insn;
              c->block = bb->index;

              if (INSN_P (insn))
                for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
                  {
                    df_ref def = *def_rec;
                    unsigned int regno = DF_REF_REGNO (def);

                    /* Ignore may-clobbers because these are generated
                       from calls.  However, every other kind of def is
                       added to dead_or_set.  */
                    if (!DF_REF_FLAGS_IS_SET (def, DF_REF_MAY_CLOBBER))
                      {
                        if (regno < FIRST_PSEUDO_REGISTER)
                          {
                            if (!fixed_regs[regno])
                              bitmap_set_bit (&c->dead_or_set, regno);
                          }
                        else if (pseudo_for_reload_consideration_p (regno))
                          bitmap_set_bit (&c->dead_or_set, regno);
                      }

                    if ((regno < FIRST_PSEUDO_REGISTER
                         || reg_renumber[regno] >= 0
                         || ira_conflicts_p)
                        && (!DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL)))
                      {
                        rtx reg = DF_REF_REG (def);

                        /* We can model subregs, but not if they are
                           wrapped in ZERO_EXTRACTs.  */
                        if (GET_CODE (reg) == SUBREG
                            && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT))
                          {
                            unsigned int start = SUBREG_BYTE (reg);
                            unsigned int last = start
                              + GET_MODE_SIZE (GET_MODE (reg));

                            init_live_subregs
                              (bitmap_bit_p (live_relevant_regs, regno),
                               live_subregs, live_subregs_used, regno, reg);

                            if (!DF_REF_FLAGS_IS_SET
                                (def, DF_REF_STRICT_LOW_PART))
                              {
                                /* Expand the range to cover entire words.
                                   Bytes added here are "don't care".  */
                                start
                                  = start / UNITS_PER_WORD * UNITS_PER_WORD;
                                last = ((last + UNITS_PER_WORD - 1)
                                        / UNITS_PER_WORD * UNITS_PER_WORD);
                              }
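
                            /* For example, with UNITS_PER_WORD == 4, an
                               HImode subreg at SUBREG_BYTE 2 yields
                               start = 2, last = 4; the rounding above
                               widens the def to bytes [0, 4), i.e. the
                               whole first word.  */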

                            /* Ignore the paradoxical bits.  */
                            if ((int) last > live_subregs_used[regno])
                              last = live_subregs_used[regno];

                            while (start < last)
                              {
                                RESET_BIT (live_subregs[regno], start);
                                start++;
                              }

                            if (sbitmap_empty_p (live_subregs[regno]))
                              {
                                live_subregs_used[regno] = 0;
                                bitmap_clear_bit (live_relevant_regs, regno);
                              }
                            else
                              /* Set live_relevant_regs here because
                                 that bit has to be true to get us to
                                 look at the live_subregs fields.  */
                              bitmap_set_bit (live_relevant_regs, regno);
                          }
                        else
                          {
                            /* DF_REF_PARTIAL is generated for
                               subregs, STRICT_LOW_PART, and
                               ZERO_EXTRACT.  We handle the subreg
                               case above, so here we have to keep from
                               modeling the def as a killing def.  */
                            if (!DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL))
                              {
                                bitmap_clear_bit (live_relevant_regs, regno);
                                live_subregs_used[regno] = 0;
                              }
                          }
                      }
                  }

              bitmap_and_compl_into (live_relevant_regs, elim_regset);
              bitmap_copy (&c->live_throughout, live_relevant_regs);

              if (INSN_P (insn))
                for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
                  {
                    df_ref use = *use_rec;
                    unsigned int regno = DF_REF_REGNO (use);
                    rtx reg = DF_REF_REG (use);

                    /* DF_REF_READ_WRITE on a use means that this use
                       is fabricated from a def that is a partial set
                       to a multiword reg.  Here, we only model the
                       subreg case that is not wrapped in ZERO_EXTRACT
                       precisely so we do not need to look at the
                       fabricated use.  */
                    if (DF_REF_FLAGS_IS_SET (use, DF_REF_READ_WRITE)
                        && !DF_REF_FLAGS_IS_SET (use, DF_REF_ZERO_EXTRACT)
                        && DF_REF_FLAGS_IS_SET (use, DF_REF_SUBREG))
                      continue;

                    /* Add the last use of each var to dead_or_set.  */
                    if (!bitmap_bit_p (live_relevant_regs, regno))
                      {
                        if (regno < FIRST_PSEUDO_REGISTER)
                          {
                            if (!fixed_regs[regno])
                              bitmap_set_bit (&c->dead_or_set, regno);
                          }
                        else if (pseudo_for_reload_consideration_p (regno))
                          bitmap_set_bit (&c->dead_or_set, regno);
                      }

                    if (regno < FIRST_PSEUDO_REGISTER
                        || pseudo_for_reload_consideration_p (regno))
                      {
                        if (GET_CODE (reg) == SUBREG
                            && !DF_REF_FLAGS_IS_SET (use,
                                                     DF_REF_SIGN_EXTRACT
                                                     | DF_REF_ZERO_EXTRACT))
                          {
                            unsigned int start = SUBREG_BYTE (reg);
                            unsigned int last = start
                              + GET_MODE_SIZE (GET_MODE (reg));

                            init_live_subregs
                              (bitmap_bit_p (live_relevant_regs, regno),
                               live_subregs, live_subregs_used, regno, reg);

                            /* Ignore the paradoxical bits.  */
                            if ((int) last > live_subregs_used[regno])
                              last = live_subregs_used[regno];

                            while (start < last)
                              {
                                SET_BIT (live_subregs[regno], start);
                                start++;
                              }
                          }
                        else
                          /* Resetting live_subregs_used effectively
                             says not to use the subreg info because
                             we are reading the whole pseudo.  */
                          live_subregs_used[regno] = 0;
                        bitmap_set_bit (live_relevant_regs, regno);
                      }
                  }
            }
        }

      /* FIXME!! The following code is a disaster.  Reload needs to see the
         labels and jump tables that are just hanging out in between
         the basic blocks.  See pr33676.  */
      insn = BB_HEAD (bb);

      /* Skip over the barriers and cruft.  */
      while (insn && (BARRIER_P (insn) || NOTE_P (insn)
                      || BLOCK_FOR_INSN (insn) == bb))
        insn = PREV_INSN (insn);

      /* While we add anything except barriers and notes, the focus is
         to get the labels and jump tables into the
         reload_insn_chain.  */
      while (insn)
        {
          if (!NOTE_P (insn) && !BARRIER_P (insn))
            {
              if (BLOCK_FOR_INSN (insn))
                break;

              c = new_insn_chain ();
              c->next = next;
              next = c;
              *p = c;
              p = &c->prev;

              /* The block makes no sense here, but it is what the old
                 code did.  */
              c->block = bb->index;
              c->insn = insn;
              bitmap_copy (&c->live_throughout, live_relevant_regs);
            }
          insn = PREV_INSN (insn);
        }
    }

  for (i = 0; i < (unsigned int) max_regno; i++)
    if (live_subregs[i])
      free (live_subregs[i]);

  reload_insn_chain = c;
  *p = NULL;

  free (live_subregs);
  free (live_subregs_used);
  BITMAP_FREE (live_relevant_regs);
  BITMAP_FREE (elim_regset);

  if (dump_file)
    print_insn_chains (dump_file);
}
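
/* The chain built here is consumed by reload (reload1.c); the
   live_throughout and dead_or_set sets let it find hard registers for
   reloads without rescanning the whole insn stream.  */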


/* All natural loops.  */
struct loops ira_loops;

/* True if we have allocno conflicts.  It is false when not optimizing
   or when the conflict table is too big.  */
bool ira_conflicts_p;

/* This is the main entry point of IRA.  */
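/* In outline, as implemented below: build the IRA internal
   representation (ira_build), color the allocnos (ira_color), emit
   moves on region borders (ira_emit), flatten the IR and reassign
   conflicting allocnos when loop regions were used, set up
   reg_renumber, and finally build the insn chains and run reload.  */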
static void
ira (FILE *f)
{
  int overall_cost_before, allocated_reg_info_size;
  bool loops_p;
  int max_regno_before_ira, ira_max_point_before_emit;
  int rebuild_p;
  int saved_flag_ira_share_spill_slots;
  basic_block bb;

  timevar_push (TV_IRA);

  if (flag_caller_saves)
    init_caller_save ();

  if (flag_ira_verbose < 10)
    {
      internal_flag_ira_verbose = flag_ira_verbose;
      ira_dump_file = f;
    }
  else
    {
      internal_flag_ira_verbose = flag_ira_verbose - 10;
      ira_dump_file = stderr;
    }
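
  /* For example, -fira-verbose=11 selects internal verbosity 1 with
     dumps sent to stderr, while values below 10 write to the regular
     dump file F.  */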

  ira_conflicts_p = optimize > 0;
  setup_prohibited_mode_move_regs ();

  df_note_add_problem ();

  if (optimize == 1)
    {
      df_live_add_problem ();
      df_live_set_all_dirty ();
    }
#ifdef ENABLE_CHECKING
  df->changeable_flags |= DF_VERIFY_SCHEDULED;
#endif
  df_analyze ();
  df_clear_flags (DF_NO_INSN_RESCAN);
  regstat_init_n_sets_and_refs ();
  regstat_compute_ri ();

  /* If we are not optimizing, then this is the only place before
     register allocation where dataflow is done.  And that is needed
     to generate these warnings.  */
  if (warn_clobbered)
    generate_setjmp_warnings ();

  /* Determine if the current function is a leaf before running IRA
     since this can impact optimizations done by the prologue and
     epilogue, thus changing register elimination offsets.  */
  current_function_is_leaf = leaf_function_p ();

  if (resize_reg_info () && flag_ira_loop_pressure)
    ira_set_pseudo_classes (ira_dump_file);

  rebuild_p = update_equiv_regs ();

#ifndef IRA_NO_OBSTACK
  gcc_obstack_init (&ira_obstack);
#endif
  bitmap_obstack_initialize (&ira_bitmap_obstack);
  if (optimize)
    {
      max_regno = max_reg_num ();
      ira_reg_equiv_len = max_regno;
      ira_reg_equiv_invariant_p
        = (bool *) ira_allocate (max_regno * sizeof (bool));
      memset (ira_reg_equiv_invariant_p, 0, max_regno * sizeof (bool));
      ira_reg_equiv_const = (rtx *) ira_allocate (max_regno * sizeof (rtx));
      memset (ira_reg_equiv_const, 0, max_regno * sizeof (rtx));
      find_reg_equiv_invariant_const ();
      if (rebuild_p)
        {
          timevar_push (TV_JUMP);
          rebuild_jump_labels (get_insns ());
          purge_all_dead_edges ();
          timevar_pop (TV_JUMP);
        }
    }

  max_regno_before_ira = allocated_reg_info_size = max_reg_num ();
  ira_setup_eliminable_regset ();

  ira_overall_cost = ira_reg_cost = ira_mem_cost = 0;
  ira_load_cost = ira_store_cost = ira_shuffle_cost = 0;
  ira_move_loops_num = ira_additional_jumps_num = 0;

  ira_assert (current_loops == NULL);
  flow_loops_find (&ira_loops);
  record_loop_exits ();
  current_loops = &ira_loops;

  if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
    fprintf (ira_dump_file, "Building IRA IR\n");
  loops_p = ira_build (optimize
                       && (flag_ira_region == IRA_REGION_ALL
                           || flag_ira_region == IRA_REGION_MIXED));

  ira_assert (ira_conflicts_p || !loops_p);

  saved_flag_ira_share_spill_slots = flag_ira_share_spill_slots;
  if (too_high_register_pressure_p ())
    /* It would just waste the compiler's time to pack spilled pseudos
       into stack slots in this case -- prohibit it.  */
    flag_ira_share_spill_slots = FALSE;

  ira_color ();

  ira_max_point_before_emit = ira_max_point;

  ira_emit (loops_p);

  if (ira_conflicts_p)
    {
      max_regno = max_reg_num ();

      if (! loops_p)
        ira_initiate_assign ();
      else
        {
          expand_reg_info (allocated_reg_info_size);
          setup_preferred_alternate_classes_for_new_pseudos
            (allocated_reg_info_size);
          allocated_reg_info_size = max_regno;

          if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
            fprintf (ira_dump_file, "Flattening IR\n");
          ira_flattening (max_regno_before_ira, ira_max_point_before_emit);
          /* New insns were generated: add notes and recalculate live
             info.  */
          df_analyze ();

          flow_loops_find (&ira_loops);
          record_loop_exits ();
          current_loops = &ira_loops;

          setup_allocno_assignment_flags ();
          ira_initiate_assign ();
          ira_reassign_conflict_allocnos (max_regno);
        }
    }

  setup_reg_renumber ();

  calculate_allocation_cost ();

#ifdef ENABLE_IRA_CHECKING
  if (ira_conflicts_p)
    check_allocation ();
#endif

  delete_trivially_dead_insns (get_insns (), max_reg_num ());
  max_regno = max_reg_num ();

  /* Allocate and zero the reg_equiv_memory_loc array.  */
  VEC_safe_grow (rtx, gc, reg_equiv_memory_loc_vec, max_regno);
  memset (VEC_address (rtx, reg_equiv_memory_loc_vec), 0,
          sizeof (rtx) * max_regno);
  reg_equiv_memory_loc = VEC_address (rtx, reg_equiv_memory_loc_vec);

  if (max_regno != max_regno_before_ira)
    {
      regstat_free_n_sets_and_refs ();
      regstat_free_ri ();
      regstat_init_n_sets_and_refs ();
      regstat_compute_ri ();
    }

  allocate_initial_values (reg_equiv_memory_loc);

  overall_cost_before = ira_overall_cost;
  if (ira_conflicts_p)
    {
      fix_reg_equiv_init ();

#ifdef ENABLE_IRA_CHECKING
      print_redundant_copies ();
#endif

      ira_spilled_reg_stack_slots_num = 0;
      ira_spilled_reg_stack_slots
        = ((struct ira_spilled_reg_stack_slot *)
           ira_allocate (max_regno
                         * sizeof (struct ira_spilled_reg_stack_slot)));
      memset (ira_spilled_reg_stack_slots, 0,
              max_regno * sizeof (struct ira_spilled_reg_stack_slot));
    }

  timevar_pop (TV_IRA);

  timevar_push (TV_RELOAD);
  df_set_flags (DF_NO_INSN_RESCAN);
  build_insn_chain ();

  reload_completed = !reload (get_insns (), ira_conflicts_p);

  finish_subregs_of_mode ();

  timevar_pop (TV_RELOAD);

  timevar_push (TV_IRA);

  if (ira_conflicts_p)
    {
      ira_free (ira_spilled_reg_stack_slots);

      ira_finish_assign ();
    }
  if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL
      && overall_cost_before != ira_overall_cost)
    fprintf (ira_dump_file, "+++Overall after reload %d\n", ira_overall_cost);
  ira_destroy ();

  flag_ira_share_spill_slots = saved_flag_ira_share_spill_slots;

  flow_loops_free (&ira_loops);
  free_dominance_info (CDI_DOMINATORS);
  FOR_ALL_BB (bb)
    bb->loop_father = NULL;
  current_loops = NULL;

  regstat_free_ri ();
  regstat_free_n_sets_and_refs ();

  if (optimize)
    {
      cleanup_cfg (CLEANUP_EXPENSIVE);

      ira_free (ira_reg_equiv_invariant_p);
      ira_free (ira_reg_equiv_const);
    }

  bitmap_obstack_release (&ira_bitmap_obstack);
#ifndef IRA_NO_OBSTACK
  obstack_free (&ira_obstack, NULL);
#endif

  /* The code after reload has changed so much that at this point we
     might as well just rescan everything.  Note that
     df_rescan_all_insns is not going to help here because it does not
     touch the artificial uses and defs.  */
  df_finish_pass (true);
  if (optimize > 1)
    df_live_add_problem ();
  df_scan_alloc (NULL);
  df_scan_blocks ();

  if (optimize)
    df_analyze ();

  timevar_pop (TV_IRA);
}


static bool
gate_ira (void)
{
  return true;
}

/* Run the integrated register allocator.  */
static unsigned int
rest_of_handle_ira (void)
{
  ira (dump_file);
  return 0;
}

struct rtl_opt_pass pass_ira =
{
 {
  RTL_PASS,
  "ira",                                /* name */
  gate_ira,                             /* gate */
  rest_of_handle_ira,                   /* execute */
  NULL,                                 /* sub */
  NULL,                                 /* next */
  0,                                    /* static_pass_number */
  TV_NONE,                              /* tv_id */
  0,                                    /* properties_required */
  0,                                    /* properties_provided */
  0,                                    /* properties_destroyed */
  0,                                    /* todo_flags_start */
  TODO_dump_func |
  TODO_ggc_collect                      /* todo_flags_finish */
 }
};