OpenCores
URL https://opencores.org/ocsvn/openrisc/openrisc/trunk

Subversion Repositories openrisc

[/] [openrisc/] [trunk/] [gnu-stable/] [gcc-4.5.1/] [gcc/] [haifa-sched.c] - Blame information for rev 843

Go to most recent revision | Details | Compare with Previous | View Log

Line No. Rev Author Line
1 280 jeremybenn
/* Instruction scheduling pass.
2
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
3
   2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010
4
   Free Software Foundation, Inc.
5
   Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
6
   and currently maintained by, Jim Wilson (wilson@cygnus.com)
7
 
8
This file is part of GCC.
9
 
10
GCC is free software; you can redistribute it and/or modify it under
11
the terms of the GNU General Public License as published by the Free
12
Software Foundation; either version 3, or (at your option) any later
13
version.
14
 
15
GCC is distributed in the hope that it will be useful, but WITHOUT ANY
16
WARRANTY; without even the implied warranty of MERCHANTABILITY or
17
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
18
for more details.
19
 
20
You should have received a copy of the GNU General Public License
21
along with GCC; see the file COPYING3.  If not see
22
<http://www.gnu.org/licenses/>.  */
23
 
24
/* Instruction scheduling pass.  This file, along with sched-deps.c,
25
   contains the generic parts.  The actual entry point is found for
26
   the normal instruction scheduling pass is found in sched-rgn.c.
27
 
28
   We compute insn priorities based on data dependencies.  Flow
29
   analysis only creates a fraction of the data-dependencies we must
30
   observe: namely, only those dependencies which the combiner can be
31
   expected to use.  For this pass, we must therefore create the
32
   remaining dependencies we need to observe: register dependencies,
33
   memory dependencies, dependencies to keep function calls in order,
34
   and the dependence between a conditional branch and the setting of
35
   condition codes are all dealt with here.
36
 
37
   The scheduler first traverses the data flow graph, starting with
38
   the last instruction, and proceeding to the first, assigning values
39
   to insn_priority as it goes.  This sorts the instructions
40
   topologically by data dependence.
41
 
42
   Once priorities have been established, we order the insns using
43
   list scheduling.  This works as follows: starting with a list of
44
   all the ready insns, and sorted according to priority number, we
45
   schedule the insn from the end of the list by placing its
46
   predecessors in the list according to their priority order.  We
47
   consider this insn scheduled by setting the pointer to the "end" of
48
   the list to point to the previous insn.  When an insn has no
49
   predecessors, we either queue it until sufficient time has elapsed
50
   or add it to the ready list.  As the instructions are scheduled or
51
   when stalls are introduced, the queue advances and dumps insns into
52
   the ready list.  When all insns down to the lowest priority have
53
   been scheduled, the critical path of the basic block has been made
54
   as short as possible.  The remaining insns are then scheduled in
55
   remaining slots.
56
 
57
   The following list shows the order in which we want to break ties
58
   among insns in the ready list:
59
 
60
   1.  choose insn with the longest path to end of bb, ties
61
   broken by
62
   2.  choose insn with least contribution to register pressure,
63
   ties broken by
64
   3.  prefer in-block upon interblock motion, ties broken by
65
   4.  prefer useful upon speculative motion, ties broken by
66
   5.  choose insn with largest control flow probability, ties
67
   broken by
68
   6.  choose insn with the least dependences upon the previously
69
   scheduled insn, or finally
70
   7   choose the insn which has the most insns dependent on it.
71
   8.  choose insn with lowest UID.
72
 
73
   Memory references complicate matters.  Only if we can be certain
74
   that memory references are not part of the data dependency graph
75
   (via true, anti, or output dependence), can we move operations past
76
   memory references.  To first approximation, reads can be done
77
   independently, while writes introduce dependencies.  Better
78
   approximations will yield fewer dependencies.
79
 
80
   Before reload, an extended analysis of interblock data dependences
81
   is required for interblock scheduling.  This is performed in
82
   compute_block_backward_dependences ().
83
 
84
   Dependencies set up by memory references are treated in exactly the
85
   same way as other dependencies, by using insn backward dependences
86
   INSN_BACK_DEPS.  INSN_BACK_DEPS are translated into forward dependences
87
   INSN_FORW_DEPS the purpose of forward list scheduling.
88
 
89
   Having optimized the critical path, we may have also unduly
90
   extended the lifetimes of some registers.  If an operation requires
91
   that constants be loaded into registers, it is certainly desirable
92
   to load those constants as early as necessary, but no earlier.
93
   I.e., it will not do to load up a bunch of registers at the
94
   beginning of a basic block only to use them at the end, if they
95
   could be loaded later, since this may result in excessive register
96
   utilization.
97
 
98
   Note that since branches are never in basic blocks, but only end
99
   basic blocks, this pass will not move branches.  But that is ok,
100
   since we can use GNU's delayed branch scheduling pass to take care
101
   of this case.
102
 
103
   Also note that no further optimizations based on algebraic
104
   identities are performed, so this pass would be a good one to
105
   perform instruction splitting, such as breaking up a multiply
106
   instruction into shifts and adds where that is profitable.
107
 
108
   Given the memory aliasing analysis that this pass should perform,
109
   it should be possible to remove redundant stores to memory, and to
110
   load values from registers instead of hitting memory.
111
 
112
   Before reload, speculative insns are moved only if a 'proof' exists
113
   that no exception will be caused by this, and if no live registers
114
   exist that inhibit the motion (live registers constraints are not
115
   represented by data dependence edges).
116
 
117
   This pass must update information that subsequent passes expect to
118
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
119
   reg_n_calls_crossed, and reg_live_length.  Also, BB_HEAD, BB_END.
120
 
121
   The information in the line number notes is carefully retained by
122
   this pass.  Notes that refer to the starting and ending of
123
   exception regions are also carefully retained by this pass.  All
124
   other NOTE insns are grouped in their same relative order at the
125
   beginning of basic blocks and regions that have been scheduled.  */
126
 
127
#include "config.h"
128
#include "system.h"
129
#include "coretypes.h"
130
#include "tm.h"
131
#include "toplev.h"
132
#include "rtl.h"
133
#include "tm_p.h"
134
#include "hard-reg-set.h"
135
#include "regs.h"
136
#include "function.h"
137
#include "flags.h"
138
#include "insn-config.h"
139
#include "insn-attr.h"
140
#include "except.h"
141
#include "toplev.h"
142
#include "recog.h"
143
#include "sched-int.h"
144
#include "target.h"
145
#include "output.h"
146
#include "params.h"
147
#include "vecprim.h"
148
#include "dbgcnt.h"
149
#include "cfgloop.h"
150
#include "ira.h"
151
 
152
#ifdef INSN_SCHEDULING
153
 
154
/* issue_rate is the number of insns that can be scheduled in the same
155
   machine cycle.  It can be defined in the config/mach/mach.h file,
156
   otherwise we set it to 1.  */
157
 
158
int issue_rate;
159
 
160
/* sched-verbose controls the amount of debugging output the
161
   scheduler prints.  It is controlled by -fsched-verbose=N:
162
   N>0 and no -DSR : the output is directed to stderr.
163
   N>=10 will direct the printouts to stderr (regardless of -dSR).
164
   N=1: same as -dSR.
165
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
166
   N=3: rtl at abort point, control-flow, regions info.
167
   N=5: dependences info.  */
168
 
169
static int sched_verbose_param = 0;
170
int sched_verbose = 0;
171
 
172
/* Debugging file.  All printouts are sent to dump, which is always set,
173
   either to stderr, or to the dump listing file (-dRS).  */
174
FILE *sched_dump = 0;
175
 
176
/* fix_sched_param() is called from toplev.c upon detection
177
   of the -fsched-verbose=N option.  */
178
 
179
void
180
fix_sched_param (const char *param, const char *val)
181
{
182
  if (!strcmp (param, "verbose"))
183
    sched_verbose_param = atoi (val);
184
  else
185
    warning (0, "fix_sched_param: unknown param: %s", param);
186
}
187
 
188
/* This is a placeholder for the scheduler parameters common
189
   to all schedulers.  */
190
struct common_sched_info_def *common_sched_info;
191
 
192
#define INSN_TICK(INSN) (HID (INSN)->tick)
193
#define INTER_TICK(INSN) (HID (INSN)->inter_tick)
194
 
195
/* If INSN_TICK of an instruction is equal to INVALID_TICK,
196
   then it should be recalculated from scratch.  */
197
#define INVALID_TICK (-(max_insn_queue_index + 1))
198
/* The minimal value of the INSN_TICK of an instruction.  */
199
#define MIN_TICK (-max_insn_queue_index)
200
 
201
/* Issue points are used to distinguish between instructions in max_issue ().
202
   For now, all instructions are equally good.  */
203
#define ISSUE_POINTS(INSN) 1
204
 
205
/* List of important notes we must keep around.  This is a pointer to the
206
   last element in the list.  */
207
rtx note_list;
208
 
209
static struct spec_info_def spec_info_var;
210
/* Description of the speculative part of the scheduling.
211
   If NULL - no speculation.  */
212
spec_info_t spec_info = NULL;
213
 
214
/* True, if recovery block was added during scheduling of current block.
215
   Used to determine, if we need to fix INSN_TICKs.  */
216
static bool haifa_recovery_bb_recently_added_p;
217
 
218
/* True, if recovery block was added during this scheduling pass.
219
   Used to determine if we should have empty memory pools of dependencies
220
   after finishing current region.  */
221
bool haifa_recovery_bb_ever_added_p;
222
 
223
/* Counters of different types of speculative instructions.  */
224
static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
225
 
226
/* Array used in {unlink, restore}_bb_notes.  */
227
static rtx *bb_header = 0;
228
 
229
/* Basic block after which recovery blocks will be created.  */
230
static basic_block before_recovery;
231
 
232
/* Basic block just before the EXIT_BLOCK and after recovery, if we have
233
   created it.  */
234
basic_block after_recovery;
235
 
236
/* FALSE if we add bb to another region, so we don't need to initialize it.  */
237
bool adding_bb_to_current_region_p = true;
238
 
239
/* Queues, etc.  */
240
 
241
/* An instruction is ready to be scheduled when all insns preceding it
242
   have already been scheduled.  It is important to ensure that all
243
   insns which use its result will not be executed until its result
244
   has been computed.  An insn is maintained in one of four structures:
245
 
246
   (P) the "Pending" set of insns which cannot be scheduled until
247
   their dependencies have been satisfied.
248
   (Q) the "Queued" set of insns that can be scheduled when sufficient
249
   time has passed.
250
   (R) the "Ready" list of unscheduled, uncommitted insns.
251
   (S) the "Scheduled" list of insns.
252
 
253
   Initially, all insns are either "Pending" or "Ready" depending on
254
   whether their dependencies are satisfied.
255
 
256
   Insns move from the "Ready" list to the "Scheduled" list as they
257
   are committed to the schedule.  As this occurs, the insns in the
258
   "Pending" list have their dependencies satisfied and move to either
259
   the "Ready" list or the "Queued" set depending on whether
260
   sufficient time has passed to make them ready.  As time passes,
261
   insns move from the "Queued" set to the "Ready" list.
262
 
263
   The "Pending" list (P) are the insns in the INSN_FORW_DEPS of the
264
   unscheduled insns, i.e., those that are ready, queued, and pending.
265
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
266
   The "Ready" list (R) is implemented by the variables `ready' and
267
   `n_ready'.
268
   The "Scheduled" list (S) is the new insn chain built by this pass.
269
 
270
   The transition (R->S) is implemented in the scheduling loop in
271
   `schedule_block' when the best insn to schedule is chosen.
272
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
273
   insns move from the ready list to the scheduled list.
274
   The transition (Q->R) is implemented in 'queue_to_insn' as time
275
   passes or stalls are introduced.  */
276
 
277
/* Implement a circular buffer to delay instructions until sufficient
278
   time has passed.  For the new pipeline description interface,
279
   MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
280
   than maximal time of instruction execution computed by genattr.c on
281
   the base maximal time of functional unit reservations and getting a
282
   result.  This is the longest time an insn may be queued.  */
283
 
284
static rtx *insn_queue;
285
static int q_ptr = 0;
286
static int q_size = 0;
287
#define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
288
#define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
289
 
290
#define QUEUE_SCHEDULED (-3)
291
#define QUEUE_NOWHERE   (-2)
292
#define QUEUE_READY     (-1)
293
/* QUEUE_SCHEDULED - INSN is scheduled.
294
   QUEUE_NOWHERE   - INSN isn't scheduled yet and is neither in
295
   queue or ready list.
296
   QUEUE_READY     - INSN is in ready list.
297
   N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles.  */
298
 
299
#define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)
300
 
301
/* The following variable value refers for all current and future
302
   reservations of the processor units.  */
303
state_t curr_state;
304
 
305
/* The following variable value is size of memory representing all
306
   current and future reservations of the processor units.  */
307
size_t dfa_state_size;
308
 
309
/* The following array is used to find the best insn from ready when
310
   the automaton pipeline interface is used.  */
311
char *ready_try = NULL;
312
 
313
/* The ready list.  */
314
struct ready_list ready = {NULL, 0, 0, 0, 0};
315
 
316
/* The pointer to the ready list (to be removed).  */
317
static struct ready_list *readyp = &ready;
318
 
319
/* Scheduling clock.  */
320
static int clock_var;
321
 
322
static int may_trap_exp (const_rtx, int);
323
 
324
/* Nonzero iff the address is comprised from at most 1 register.  */
325
#define CONST_BASED_ADDRESS_P(x)                        \
326
  (REG_P (x)                                    \
327
   || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS   \
328
        || (GET_CODE (x) == LO_SUM))                    \
329
       && (CONSTANT_P (XEXP (x, 0))                      \
330
           || CONSTANT_P (XEXP (x, 1)))))
331
 
332
/* Returns a class that insn with GET_DEST(insn)=x may belong to,
333
   as found by analyzing insn's expression.  */
334
 
335
 
336
static int haifa_luid_for_non_insn (rtx x);
337
 
338
/* Haifa version of sched_info hooks common to all headers.  */
339
const struct common_sched_info_def haifa_common_sched_info =
340
  {
341
    NULL, /* fix_recovery_cfg */
342
    NULL, /* add_block */
343
    NULL, /* estimate_number_of_insns */
344
    haifa_luid_for_non_insn, /* luid_for_non_insn */
345
    SCHED_PASS_UNKNOWN /* sched_pass_id */
346
  };
347
 
348
const struct sched_scan_info_def *sched_scan_info;
349
 
350
/* Mapping from instruction UID to its Logical UID.  */
351
VEC (int, heap) *sched_luids = NULL;
352
 
353
/* Next LUID to assign to an instruction.  */
354
int sched_max_luid = 1;
355
 
356
/* Haifa Instruction Data.  */
357
VEC (haifa_insn_data_def, heap) *h_i_d = NULL;
358
 
359
void (* sched_init_only_bb) (basic_block, basic_block);
360
 
361
/* Split block function.  Different schedulers might use different functions
362
   to handle their internal data consistent.  */
363
basic_block (* sched_split_block) (basic_block, rtx);
364
 
365
/* Create empty basic block after the specified block.  */
366
basic_block (* sched_create_empty_bb) (basic_block);
367
 
368
static int
369
may_trap_exp (const_rtx x, int is_store)
370
{
371
  enum rtx_code code;
372
 
373
  if (x == 0)
374
    return TRAP_FREE;
375
  code = GET_CODE (x);
376
  if (is_store)
377
    {
378
      if (code == MEM && may_trap_p (x))
379
        return TRAP_RISKY;
380
      else
381
        return TRAP_FREE;
382
    }
383
  if (code == MEM)
384
    {
385
      /* The insn uses memory:  a volatile load.  */
386
      if (MEM_VOLATILE_P (x))
387
        return IRISKY;
388
      /* An exception-free load.  */
389
      if (!may_trap_p (x))
390
        return IFREE;
391
      /* A load with 1 base register, to be further checked.  */
392
      if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
393
        return PFREE_CANDIDATE;
394
      /* No info on the load, to be further checked.  */
395
      return PRISKY_CANDIDATE;
396
    }
397
  else
398
    {
399
      const char *fmt;
400
      int i, insn_class = TRAP_FREE;
401
 
402
      /* Neither store nor load, check if it may cause a trap.  */
403
      if (may_trap_p (x))
404
        return TRAP_RISKY;
405
      /* Recursive step: walk the insn...  */
406
      fmt = GET_RTX_FORMAT (code);
407
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
408
        {
409
          if (fmt[i] == 'e')
410
            {
411
              int tmp_class = may_trap_exp (XEXP (x, i), is_store);
412
              insn_class = WORST_CLASS (insn_class, tmp_class);
413
            }
414
          else if (fmt[i] == 'E')
415
            {
416
              int j;
417
              for (j = 0; j < XVECLEN (x, i); j++)
418
                {
419
                  int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
420
                  insn_class = WORST_CLASS (insn_class, tmp_class);
421
                  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
422
                    break;
423
                }
424
            }
425
          if (insn_class == TRAP_RISKY || insn_class == IRISKY)
426
            break;
427
        }
428
      return insn_class;
429
    }
430
}
431
 
432
/* Classifies rtx X of an insn for the purpose of verifying that X can be
433
   executed speculatively (and consequently the insn can be moved
434
   speculatively), by examining X, returning:
435
   TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
436
   TRAP_FREE: non-load insn.
437
   IFREE: load from a globally safe location.
438
   IRISKY: volatile load.
439
   PFREE_CANDIDATE, PRISKY_CANDIDATE: load that need to be checked for
440
   being either PFREE or PRISKY.  */
441
 
442
static int
443
haifa_classify_rtx (const_rtx x)
444
{
445
  int tmp_class = TRAP_FREE;
446
  int insn_class = TRAP_FREE;
447
  enum rtx_code code;
448
 
449
  if (GET_CODE (x) == PARALLEL)
450
    {
451
      int i, len = XVECLEN (x, 0);
452
 
453
      for (i = len - 1; i >= 0; i--)
454
        {
455
          tmp_class = haifa_classify_rtx (XVECEXP (x, 0, i));
456
          insn_class = WORST_CLASS (insn_class, tmp_class);
457
          if (insn_class == TRAP_RISKY || insn_class == IRISKY)
458
            break;
459
        }
460
    }
461
  else
462
    {
463
      code = GET_CODE (x);
464
      switch (code)
465
        {
466
        case CLOBBER:
467
          /* Test if it is a 'store'.  */
468
          tmp_class = may_trap_exp (XEXP (x, 0), 1);
469
          break;
470
        case SET:
471
          /* Test if it is a store.  */
472
          tmp_class = may_trap_exp (SET_DEST (x), 1);
473
          if (tmp_class == TRAP_RISKY)
474
            break;
475
          /* Test if it is a load.  */
476
          tmp_class =
477
            WORST_CLASS (tmp_class,
478
                         may_trap_exp (SET_SRC (x), 0));
479
          break;
480
        case COND_EXEC:
481
          tmp_class = haifa_classify_rtx (COND_EXEC_CODE (x));
482
          if (tmp_class == TRAP_RISKY)
483
            break;
484
          tmp_class = WORST_CLASS (tmp_class,
485
                                   may_trap_exp (COND_EXEC_TEST (x), 0));
486
          break;
487
        case TRAP_IF:
488
          tmp_class = TRAP_RISKY;
489
          break;
490
        default:;
491
        }
492
      insn_class = tmp_class;
493
    }
494
 
495
  return insn_class;
496
}
497
 
498
int
499
haifa_classify_insn (const_rtx insn)
500
{
501
  return haifa_classify_rtx (PATTERN (insn));
502
}
503
 
504
/* Forward declarations.  */
505
 
506
static int priority (rtx);
507
static int rank_for_schedule (const void *, const void *);
508
static void swap_sort (rtx *, int);
509
static void queue_insn (rtx, int);
510
static int schedule_insn (rtx);
511
static void adjust_priority (rtx);
512
static void advance_one_cycle (void);
513
static void extend_h_i_d (void);
514
 
515
 
516
/* Notes handling mechanism:
517
   =========================
518
   Generally, NOTES are saved before scheduling and restored after scheduling.
519
   The scheduler distinguishes between two types of notes:
520
 
521
   (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
522
   Before scheduling a region, a pointer to the note is added to the insn
523
   that follows or precedes it.  (This happens as part of the data dependence
524
   computation).  After scheduling an insn, the pointer contained in it is
525
   used for regenerating the corresponding note (in reemit_notes).
526
 
527
   (2) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
528
   these notes are put in a list (in rm_other_notes() and
529
   unlink_other_notes ()).  After scheduling the block, these notes are
530
   inserted at the beginning of the block (in schedule_block()).  */
531
 
532
static void ready_add (struct ready_list *, rtx, bool);
533
static rtx ready_remove_first (struct ready_list *);
534
 
535
static void queue_to_ready (struct ready_list *);
536
static int early_queue_to_ready (state_t, struct ready_list *);
537
 
538
static void debug_ready_list (struct ready_list *);
539
 
540
/* The following functions are used to implement multi-pass scheduling
541
   on the first cycle.  */
542
static rtx ready_remove (struct ready_list *, int);
543
static void ready_remove_insn (rtx);
544
 
545
static int choose_ready (struct ready_list *, rtx *);
546
 
547
static void fix_inter_tick (rtx, rtx);
548
static int fix_tick_ready (rtx);
549
static void change_queue_index (rtx, int);
550
 
551
/* The following functions are used to implement scheduling of data/control
552
   speculative instructions.  */
553
 
554
static void extend_h_i_d (void);
555
static void init_h_i_d (rtx);
556
static void generate_recovery_code (rtx);
557
static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
558
static void begin_speculative_block (rtx);
559
static void add_to_speculative_block (rtx);
560
static void init_before_recovery (basic_block *);
561
static void create_check_block_twin (rtx, bool);
562
static void fix_recovery_deps (basic_block);
563
static void haifa_change_pattern (rtx, rtx);
564
static void dump_new_block_header (int, basic_block, rtx, rtx);
565
static void restore_bb_notes (basic_block);
566
static void fix_jump_move (rtx);
567
static void move_block_after_check (rtx);
568
static void move_succs (VEC(edge,gc) **, basic_block);
569
static void sched_remove_insn (rtx);
570
static void clear_priorities (rtx, rtx_vec_t *);
571
static void calc_priorities (rtx_vec_t);
572
static void add_jump_dependencies (rtx, rtx);
573
#ifdef ENABLE_CHECKING
574
static int has_edge_p (VEC(edge,gc) *, int);
575
static void check_cfg (rtx, rtx);
576
#endif
577
 
578
#endif /* INSN_SCHEDULING */
579
 
580
/* Point to state used for the current scheduling pass.  */
581
struct haifa_sched_info *current_sched_info;
582
 
583
#ifndef INSN_SCHEDULING
584
void
585
schedule_insns (void)
586
{
587
}
588
#else
589
 
590
/* Do register pressure sensitive insn scheduling if the flag is set
591
   up.  */
592
bool sched_pressure_p;
593
 
594
/* Map regno -> its cover class.  The map defined only when
595
   SCHED_PRESSURE_P is true.  */
596
enum reg_class *sched_regno_cover_class;
597
 
598
/* The current register pressure.  Only elements corresponding cover
599
   classes are defined.  */
600
static int curr_reg_pressure[N_REG_CLASSES];
601
 
602
/* Saved value of the previous array.  */
603
static int saved_reg_pressure[N_REG_CLASSES];
604
 
605
/* Register living at given scheduling point.  */
606
static bitmap curr_reg_live;
607
 
608
/* Saved value of the previous array.  */
609
static bitmap saved_reg_live;
610
 
611
/* Registers mentioned in the current region.  */
612
static bitmap region_ref_regs;
613
 
614
/* Initiate register pressure relative info for scheduling the current
615
   region.  Currently it is only clearing register mentioned in the
616
   current region.  */
617
void
618
sched_init_region_reg_pressure_info (void)
619
{
620
  bitmap_clear (region_ref_regs);
621
}
622
 
623
/* Update current register pressure related info after birth (if
624
   BIRTH_P) or death of register REGNO.  */
625
static void
626
mark_regno_birth_or_death (int regno, bool birth_p)
627
{
628
  enum reg_class cover_class;
629
 
630
  cover_class = sched_regno_cover_class[regno];
631
  if (regno >= FIRST_PSEUDO_REGISTER)
632
    {
633
      if (cover_class != NO_REGS)
634
        {
635
          if (birth_p)
636
            {
637
              bitmap_set_bit (curr_reg_live, regno);
638
              curr_reg_pressure[cover_class]
639
                += ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
640
            }
641
          else
642
            {
643
              bitmap_clear_bit (curr_reg_live, regno);
644
              curr_reg_pressure[cover_class]
645
                -= ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
646
            }
647
        }
648
    }
649
  else if (cover_class != NO_REGS
650
           && ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
651
    {
652
      if (birth_p)
653
        {
654
          bitmap_set_bit (curr_reg_live, regno);
655
          curr_reg_pressure[cover_class]++;
656
        }
657
      else
658
        {
659
          bitmap_clear_bit (curr_reg_live, regno);
660
          curr_reg_pressure[cover_class]--;
661
        }
662
    }
663
}
664
 
665
/* Initiate current register pressure related info from living
666
   registers given by LIVE.  */
667
static void
668
initiate_reg_pressure_info (bitmap live)
669
{
670
  int i;
671
  unsigned int j;
672
  bitmap_iterator bi;
673
 
674
  for (i = 0; i < ira_reg_class_cover_size; i++)
675
    curr_reg_pressure[ira_reg_class_cover[i]] = 0;
676
  bitmap_clear (curr_reg_live);
677
  EXECUTE_IF_SET_IN_BITMAP (live, 0, j, bi)
678
    if (current_nr_blocks == 1 || bitmap_bit_p (region_ref_regs, j))
679
      mark_regno_birth_or_death (j, true);
680
}
681
 
682
/* Mark registers in X as mentioned in the current region.  */
683
static void
684
setup_ref_regs (rtx x)
685
{
686
  int i, j, regno;
687
  const RTX_CODE code = GET_CODE (x);
688
  const char *fmt;
689
 
690
  if (REG_P (x))
691
    {
692
      regno = REGNO (x);
693
      if (regno >= FIRST_PSEUDO_REGISTER)
694
        bitmap_set_bit (region_ref_regs, REGNO (x));
695
      else
696
        for (i = hard_regno_nregs[regno][GET_MODE (x)] - 1; i >= 0; i--)
697
          bitmap_set_bit (region_ref_regs, regno + i);
698
      return;
699
    }
700
  fmt = GET_RTX_FORMAT (code);
701
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
702
    if (fmt[i] == 'e')
703
      setup_ref_regs (XEXP (x, i));
704
    else if (fmt[i] == 'E')
705
      {
706
        for (j = 0; j < XVECLEN (x, i); j++)
707
          setup_ref_regs (XVECEXP (x, i, j));
708
      }
709
}
710
 
711
/* Initiate current register pressure related info at the start of
712
   basic block BB.  */
713
static void
714
initiate_bb_reg_pressure_info (basic_block bb)
715
{
716
  /* JPB 1-Sep-10: Fixed warning. */
717
#ifdef EH_RETURN_DATA_REGNO
718
  unsigned int i;
719
#endif
720
  rtx insn;
721
 
722
  if (current_nr_blocks > 1)
723
    FOR_BB_INSNS (bb, insn)
724
      if (NONDEBUG_INSN_P (insn))
725
        setup_ref_regs (PATTERN (insn));
726
  initiate_reg_pressure_info (df_get_live_in (bb));
727
#ifdef EH_RETURN_DATA_REGNO
728
  if (bb_has_eh_pred (bb))
729
    for (i = 0; ; ++i)
730
      {
731
        unsigned int regno = EH_RETURN_DATA_REGNO (i);
732
 
733
        if (regno == INVALID_REGNUM)
734
          break;
735
        if (! bitmap_bit_p (df_get_live_in (bb), regno))
736
          mark_regno_birth_or_death (regno, true);
737
      }
738
#endif
739
}
740
 
741
/* Save current register pressure related info.  */
742
static void
743
save_reg_pressure (void)
744
{
745
  int i;
746
 
747
  for (i = 0; i < ira_reg_class_cover_size; i++)
748
    saved_reg_pressure[ira_reg_class_cover[i]]
749
      = curr_reg_pressure[ira_reg_class_cover[i]];
750
  bitmap_copy (saved_reg_live, curr_reg_live);
751
}
752
 
753
/* Restore saved register pressure related info.  */
754
static void
755
restore_reg_pressure (void)
756
{
757
  int i;
758
 
759
  for (i = 0; i < ira_reg_class_cover_size; i++)
760
    curr_reg_pressure[ira_reg_class_cover[i]]
761
      = saved_reg_pressure[ira_reg_class_cover[i]];
762
  bitmap_copy (curr_reg_live, saved_reg_live);
763
}
764
 
765
/* Return TRUE if the register is dying after its USE.  */
766
static bool
767
dying_use_p (struct reg_use_data *use)
768
{
769
  struct reg_use_data *next;
770
 
771
  for (next = use->next_regno_use; next != use; next = next->next_regno_use)
772
    if (NONDEBUG_INSN_P (next->insn)
773
        && QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
774
      return false;
775
  return true;
776
}
777
 
778
/* Print info about the current register pressure and its excess for
779
   each cover class.  */
780
static void
781
print_curr_reg_pressure (void)
782
{
783
  int i;
784
  enum reg_class cl;
785
 
786
  fprintf (sched_dump, ";;\t");
787
  for (i = 0; i < ira_reg_class_cover_size; i++)
788
    {
789
      cl = ira_reg_class_cover[i];
790
      gcc_assert (curr_reg_pressure[cl] >= 0);
791
      fprintf (sched_dump, "  %s:%d(%d)", reg_class_names[cl],
792
               curr_reg_pressure[cl],
793
               curr_reg_pressure[cl] - ira_available_class_regs[cl]);
794
    }
795
  fprintf (sched_dump, "\n");
796
}
797
 
798
/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
799
   so that insns independent of the last scheduled insn will be preferred
800
   over dependent instructions.  */
801
 
802
static rtx last_scheduled_insn;
803
 
804
/* Cached cost of the instruction.  Use below function to get cost of the
805
   insn.  -1 here means that the field is not initialized.  */
806
#define INSN_COST(INSN) (HID (INSN)->cost)
807
 
808
/* Compute cost of executing INSN.
809
   This is the number of cycles between instruction issue and
810
   instruction results.  */
811
int
812
insn_cost (rtx insn)
813
{
814
  int cost;
815
 
816
  if (sel_sched_p ())
817
    {
818
      if (recog_memoized (insn) < 0)
819
        return 0;
820
 
821
      cost = insn_default_latency (insn);
822
      if (cost < 0)
823
        cost = 0;
824
 
825
      return cost;
826
    }
827
 
828
  cost = INSN_COST (insn);
829
 
830
  if (cost < 0)
831
    {
832
      /* A USE insn, or something else we don't need to
833
         understand.  We can't pass these directly to
834
         result_ready_cost or insn_default_latency because it will
835
         trigger a fatal error for unrecognizable insns.  */
836
      if (recog_memoized (insn) < 0)
837
        {
838
          INSN_COST (insn) = 0;
839
          return 0;
840
        }
841
      else
842
        {
843
          cost = insn_default_latency (insn);
844
          if (cost < 0)
845
            cost = 0;
846
 
847
          INSN_COST (insn) = cost;
848
        }
849
    }
850
 
851
  return cost;
852
}
853
 
854
/* Compute cost of dependence LINK.
855
   This is the number of cycles between instruction issue and
856
   instruction results.
857
   ??? We also use this function to call recog_memoized on all insns.  */
858
int
859
dep_cost_1 (dep_t link, dw_t dw)
860
{
861
  rtx insn = DEP_PRO (link);
862
  rtx used = DEP_CON (link);
863
  int cost;
864
 
865
  /* A USE insn should never require the value used to be computed.
866
     This allows the computation of a function's result and parameter
867
     values to overlap the return and call.  We don't care about the
868
     the dependence cost when only decreasing register pressure.  */
869
  if (recog_memoized (used) < 0)
870
    {
871
      cost = 0;
872
      recog_memoized (insn);
873
    }
874
  else
875
    {
876
      enum reg_note dep_type = DEP_TYPE (link);
877
 
878
      cost = insn_cost (insn);
879
 
880
      if (INSN_CODE (insn) >= 0)
881
        {
882
          if (dep_type == REG_DEP_ANTI)
883
            cost = 0;
884
          else if (dep_type == REG_DEP_OUTPUT)
885
            {
886
              cost = (insn_default_latency (insn)
887
                      - insn_default_latency (used));
888
              if (cost <= 0)
889
                cost = 1;
890
            }
891
          else if (bypass_p (insn))
892
            cost = insn_latency (insn, used);
893
        }
894
 
895
 
896
      if (targetm.sched.adjust_cost_2)
897
        cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
898
                                            dw);
899
      else if (targetm.sched.adjust_cost != NULL)
900
        {
901
          /* This variable is used for backward compatibility with the
902
             targets.  */
903
          rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
904
 
905
          /* Make it self-cycled, so that if some tries to walk over this
906
             incomplete list he/she will be caught in an endless loop.  */
907
          XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
908
 
909
          /* Targets use only REG_NOTE_KIND of the link.  */
910
          PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
911
 
912
          cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
913
                                            insn, cost);
914
 
915
          free_INSN_LIST_node (dep_cost_rtx_link);
916
        }
917
 
918
      if (cost < 0)
919
        cost = 0;
920
    }
921
 
922
  return cost;
923
}
924
 
925
/* Compute cost of dependence LINK.
926
   This is the number of cycles between instruction issue and
927
   instruction results.  */
928
int
929
dep_cost (dep_t link)
930
{
931
  return dep_cost_1 (link, 0);
932
}
933
 
934
/* Use this sel-sched.c friendly function in reorder2 instead of increasing
935
   INSN_PRIORITY explicitly.  */
936
void
937
increase_insn_priority (rtx insn, int amount)
938
{
939
  if (!sel_sched_p ())
940
    {
941
      /* We're dealing with haifa-sched.c INSN_PRIORITY.  */
942
      if (INSN_PRIORITY_KNOWN (insn))
943
          INSN_PRIORITY (insn) += amount;
944
    }
945
  else
946
    {
947
      /* In sel-sched.c INSN_PRIORITY is not kept up to date.
948
         Use EXPR_PRIORITY instead. */
949
      sel_add_to_insn_priority (insn, amount);
950
    }
951
}
952
 
953
/* Return 'true' if DEP should be included in priority calculations.  */
954
static bool
955
contributes_to_priority_p (dep_t dep)
956
{
957
  if (DEBUG_INSN_P (DEP_CON (dep))
958
      || DEBUG_INSN_P (DEP_PRO (dep)))
959
    return false;
960
 
961
  /* Critical path is meaningful in block boundaries only.  */
962
  if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
963
                                                    DEP_PRO (dep)))
964
    return false;
965
 
966
  /* If flag COUNT_SPEC_IN_CRITICAL_PATH is set,
967
     then speculative instructions will less likely be
968
     scheduled.  That is because the priority of
969
     their producers will increase, and, thus, the
970
     producers will more likely be scheduled, thus,
971
     resolving the dependence.  */
972
  if (sched_deps_info->generate_spec_deps
973
      && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
974
      && (DEP_STATUS (dep) & SPECULATIVE))
975
    return false;
976
 
977
  return true;
978
}
979
 
980
/* Compute the number of nondebug forward deps of an insn.  */
981
 
982
static int
983
dep_list_size (rtx insn)
984
{
985
  sd_iterator_def sd_it;
986
  dep_t dep;
987
  int dbgcount = 0, nodbgcount = 0;
988
 
989
  if (!MAY_HAVE_DEBUG_INSNS)
990
    return sd_lists_size (insn, SD_LIST_FORW);
991
 
992
  FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
993
    {
994
      if (DEBUG_INSN_P (DEP_CON (dep)))
995
        dbgcount++;
996
      else if (!DEBUG_INSN_P (DEP_PRO (dep)))
997
        nodbgcount++;
998
    }
999
 
1000
  gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
1001
 
1002
  return nodbgcount;
1003
}
1004
 
1005
/* Compute the priority number for INSN.  */
1006
static int
1007
priority (rtx insn)
1008
{
1009
  if (! INSN_P (insn))
1010
    return 0;
1011
 
1012
  /* We should not be interested in priority of an already scheduled insn.  */
1013
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
1014
 
1015
  if (!INSN_PRIORITY_KNOWN (insn))
1016
    {
1017
      int this_priority = -1;
1018
 
1019
      if (dep_list_size (insn) == 0)
1020
        /* ??? We should set INSN_PRIORITY to insn_cost when and insn has
1021
           some forward deps but all of them are ignored by
1022
           contributes_to_priority hook.  At the moment we set priority of
1023
           such insn to 0.  */
1024
        this_priority = insn_cost (insn);
1025
      else
1026
        {
1027
          rtx prev_first, twin;
1028
          basic_block rec;
1029
 
1030
          /* For recovery check instructions we calculate priority slightly
1031
             different than that of normal instructions.  Instead of walking
1032
             through INSN_FORW_DEPS (check) list, we walk through
1033
             INSN_FORW_DEPS list of each instruction in the corresponding
1034
             recovery block.  */
1035
 
1036
          /* Selective scheduling does not define RECOVERY_BLOCK macro.  */
1037
          rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
1038
          if (!rec || rec == EXIT_BLOCK_PTR)
1039
            {
1040
              prev_first = PREV_INSN (insn);
1041
              twin = insn;
1042
            }
1043
          else
1044
            {
1045
              prev_first = NEXT_INSN (BB_HEAD (rec));
1046
              twin = PREV_INSN (BB_END (rec));
1047
            }
1048
 
1049
          do
1050
            {
1051
              sd_iterator_def sd_it;
1052
              dep_t dep;
1053
 
1054
              FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
1055
                {
1056
                  rtx next;
1057
                  int next_priority;
1058
 
1059
                  next = DEP_CON (dep);
1060
 
1061
                  if (BLOCK_FOR_INSN (next) != rec)
1062
                    {
1063
                      int cost;
1064
 
1065
                      if (!contributes_to_priority_p (dep))
1066
                        continue;
1067
 
1068
                      if (twin == insn)
1069
                        cost = dep_cost (dep);
1070
                      else
1071
                        {
1072
                          struct _dep _dep1, *dep1 = &_dep1;
1073
 
1074
                          init_dep (dep1, insn, next, REG_DEP_ANTI);
1075
 
1076
                          cost = dep_cost (dep1);
1077
                        }
1078
 
1079
                      next_priority = cost + priority (next);
1080
 
1081
                      if (next_priority > this_priority)
1082
                        this_priority = next_priority;
1083
                    }
1084
                }
1085
 
1086
              twin = PREV_INSN (twin);
1087
            }
1088
          while (twin != prev_first);
1089
        }
1090
 
1091
      if (this_priority < 0)
1092
        {
1093
          gcc_assert (this_priority == -1);
1094
 
1095
          this_priority = insn_cost (insn);
1096
        }
1097
 
1098
      INSN_PRIORITY (insn) = this_priority;
1099
      INSN_PRIORITY_STATUS (insn) = 1;
1100
    }
1101
 
1102
  return INSN_PRIORITY (insn);
1103
}
1104
 
1105
/* Macros and functions for keeping the priority queue sorted, and
1106
   dealing with queuing and dequeuing of instructions.  */
1107
 
1108
#define SCHED_SORT(READY, N_READY)                                   \
1109
do { if ((N_READY) == 2)                                             \
1110
       swap_sort (READY, N_READY);                                   \
1111
     else if ((N_READY) > 2)                                         \
1112
         qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }  \
1113
while (0)
1114
 
1115
/* Setup info about the current register pressure impact of scheduling
1116
   INSN at the current scheduling point.  */
1117
static void
1118
setup_insn_reg_pressure_info (rtx insn)
1119
{
1120
  int i, change, before, after, hard_regno;
1121
  int excess_cost_change;
1122
  enum machine_mode mode;
1123
  enum reg_class cl;
1124
  struct reg_pressure_data *pressure_info;
1125
  int *max_reg_pressure;
1126
  struct reg_use_data *use;
1127
  static int death[N_REG_CLASSES];
1128
 
1129
  excess_cost_change = 0;
1130
  for (i = 0; i < ira_reg_class_cover_size; i++)
1131
    death[ira_reg_class_cover[i]] = 0;
1132
  for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1133
    if (dying_use_p (use))
1134
      {
1135
        cl = sched_regno_cover_class[use->regno];
1136
        if (use->regno < FIRST_PSEUDO_REGISTER)
1137
          death[cl]++;
1138
        else
1139
          death[cl] += ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (use->regno)];
1140
      }
1141
  pressure_info = INSN_REG_PRESSURE (insn);
1142
  max_reg_pressure = INSN_MAX_REG_PRESSURE (insn);
1143
  gcc_assert (pressure_info != NULL && max_reg_pressure != NULL);
1144
  for (i = 0; i < ira_reg_class_cover_size; i++)
1145
    {
1146
      cl = ira_reg_class_cover[i];
1147
      gcc_assert (curr_reg_pressure[cl] >= 0);
1148
      change = (int) pressure_info[i].set_increase - death[cl];
1149
      before = MAX (0, max_reg_pressure[i] - ira_available_class_regs[cl]);
1150
      after = MAX (0, max_reg_pressure[i] + change
1151
                   - ira_available_class_regs[cl]);
1152
      hard_regno = ira_class_hard_regs[cl][0];
1153
      gcc_assert (hard_regno >= 0);
1154
      mode = reg_raw_mode[hard_regno];
1155
      excess_cost_change += ((after - before)
1156
                             * (ira_memory_move_cost[mode][cl][0]
1157
                                + ira_memory_move_cost[mode][cl][1]));
1158
    }
1159
  INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insn) = excess_cost_change;
1160
}
1161
 
1162
/* Returns a positive value if x is preferred; returns a negative value if
1163
   y is preferred.  Should never return 0, since that will make the sort
1164
   unstable.  */
1165
 
1166
static int
1167
rank_for_schedule (const void *x, const void *y)
1168
{
1169
  rtx tmp = *(const rtx *) y;
1170
  rtx tmp2 = *(const rtx *) x;
1171
  rtx last;
1172
  int tmp_class, tmp2_class;
1173
  int val, priority_val, info_val;
1174
 
1175
  if (MAY_HAVE_DEBUG_INSNS)
1176
    {
1177
      /* Schedule debug insns as early as possible.  */
1178
      if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
1179
        return -1;
1180
      else if (DEBUG_INSN_P (tmp2))
1181
        return 1;
1182
    }
1183
 
1184
  /* The insn in a schedule group should be issued the first.  */
1185
  if (flag_sched_group_heuristic &&
1186
      SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
1187
    return SCHED_GROUP_P (tmp2) ? 1 : -1;
1188
 
1189
  /* Make sure that priority of TMP and TMP2 are initialized.  */
1190
  gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
1191
 
1192
  if (sched_pressure_p)
1193
    {
1194
      int diff;
1195
 
1196
      /* Prefer insn whose scheduling results in the smallest register
1197
         pressure excess.  */
1198
      if ((diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
1199
                   + (INSN_TICK (tmp) > clock_var
1200
                      ? INSN_TICK (tmp) - clock_var : 0)
1201
                   - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2)
1202
                   - (INSN_TICK (tmp2) > clock_var
1203
                      ? INSN_TICK (tmp2) - clock_var : 0))) != 0)
1204
        return diff;
1205
    }
1206
 
1207
 
1208
  if (sched_pressure_p
1209
      && (INSN_TICK (tmp2) > clock_var || INSN_TICK (tmp) > clock_var))
1210
    {
1211
      if (INSN_TICK (tmp) <= clock_var)
1212
        return -1;
1213
      else if (INSN_TICK (tmp2) <= clock_var)
1214
        return 1;
1215
      else
1216
        return INSN_TICK (tmp) - INSN_TICK (tmp2);
1217
    }
1218
  /* Prefer insn with higher priority.  */
1219
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
1220
 
1221
  if (flag_sched_critical_path_heuristic && priority_val)
1222
    return priority_val;
1223
 
1224
  /* Prefer speculative insn with greater dependencies weakness.  */
1225
  if (flag_sched_spec_insn_heuristic && spec_info)
1226
    {
1227
      ds_t ds1, ds2;
1228
      dw_t dw1, dw2;
1229
      int dw;
1230
 
1231
      ds1 = TODO_SPEC (tmp) & SPECULATIVE;
1232
      if (ds1)
1233
        dw1 = ds_weak (ds1);
1234
      else
1235
        dw1 = NO_DEP_WEAK;
1236
 
1237
      ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
1238
      if (ds2)
1239
        dw2 = ds_weak (ds2);
1240
      else
1241
        dw2 = NO_DEP_WEAK;
1242
 
1243
      dw = dw2 - dw1;
1244
      if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
1245
        return dw;
1246
    }
1247
 
1248
  info_val = (*current_sched_info->rank) (tmp, tmp2);
1249
  if(flag_sched_rank_heuristic && info_val)
1250
    return info_val;
1251
 
1252
  if (flag_sched_last_insn_heuristic)
1253
    {
1254
      last = last_scheduled_insn;
1255
 
1256
      if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
1257
        do
1258
          last = PREV_INSN (last);
1259
        while (!NONDEBUG_INSN_P (last)
1260
               && last != current_sched_info->prev_head);
1261
    }
1262
 
1263
  /* Compare insns based on their relation to the last scheduled
1264
     non-debug insn.  */
1265
  if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
1266
    {
1267
      dep_t dep1;
1268
      dep_t dep2;
1269
 
1270
      /* Classify the instructions into three classes:
1271
         1) Data dependent on last schedule insn.
1272
         2) Anti/Output dependent on last scheduled insn.
1273
         3) Independent of last scheduled insn, or has latency of one.
1274
         Choose the insn from the highest numbered class if different.  */
1275
      dep1 = sd_find_dep_between (last, tmp, true);
1276
 
1277
      if (dep1 == NULL || dep_cost (dep1) == 1)
1278
        tmp_class = 3;
1279
      else if (/* Data dependence.  */
1280
               DEP_TYPE (dep1) == REG_DEP_TRUE)
1281
        tmp_class = 1;
1282
      else
1283
        tmp_class = 2;
1284
 
1285
      dep2 = sd_find_dep_between (last, tmp2, true);
1286
 
1287
      if (dep2 == NULL || dep_cost (dep2)  == 1)
1288
        tmp2_class = 3;
1289
      else if (/* Data dependence.  */
1290
               DEP_TYPE (dep2) == REG_DEP_TRUE)
1291
        tmp2_class = 1;
1292
      else
1293
        tmp2_class = 2;
1294
 
1295
      if ((val = tmp2_class - tmp_class))
1296
        return val;
1297
    }
1298
 
1299
  /* Prefer the insn which has more later insns that depend on it.
1300
     This gives the scheduler more freedom when scheduling later
1301
     instructions at the expense of added register pressure.  */
1302
 
1303
  val = (dep_list_size (tmp2) - dep_list_size (tmp));
1304
 
1305
  if (flag_sched_dep_count_heuristic && val != 0)
1306
    return val;
1307
 
1308
  /* If insns are equally good, sort by INSN_LUID (original insn order),
1309
     so that we make the sort stable.  This minimizes instruction movement,
1310
     thus minimizing sched's effect on debugging and cross-jumping.  */
1311
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
1312
}
1313
 
1314
/* Resort the array A in which only element at index N may be out of order.  */
1315
 
1316
HAIFA_INLINE static void
1317
swap_sort (rtx *a, int n)
1318
{
1319
  rtx insn = a[n - 1];
1320
  int i = n - 2;
1321
 
1322
  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
1323
    {
1324
      a[i + 1] = a[i];
1325
      i -= 1;
1326
    }
1327
  a[i + 1] = insn;
1328
}
1329
 
1330
/* Add INSN to the insn queue so that it can be executed at least
1331
   N_CYCLES after the currently executing insn.  Preserve insns
1332
   chain for debugging purposes.  */
1333
 
1334
HAIFA_INLINE static void
1335
queue_insn (rtx insn, int n_cycles)
1336
{
1337
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1338
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
1339
 
1340
  gcc_assert (n_cycles <= max_insn_queue_index);
1341
  gcc_assert (!DEBUG_INSN_P (insn));
1342
 
1343
  insn_queue[next_q] = link;
1344
  q_size += 1;
1345
 
1346
  if (sched_verbose >= 2)
1347
    {
1348
      fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
1349
               (*current_sched_info->print_insn) (insn, 0));
1350
 
1351
      fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
1352
    }
1353
 
1354
  QUEUE_INDEX (insn) = next_q;
1355
}
1356
 
1357
/* Remove INSN from queue.  */
1358
static void
1359
queue_remove (rtx insn)
1360
{
1361
  gcc_assert (QUEUE_INDEX (insn) >= 0);
1362
  remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
1363
  q_size--;
1364
  QUEUE_INDEX (insn) = QUEUE_NOWHERE;
1365
}
1366
 
1367
/* Return a pointer to the bottom of the ready list, i.e. the insn
1368
   with the lowest priority.  */
1369
 
1370
rtx *
1371
ready_lastpos (struct ready_list *ready)
1372
{
1373
  gcc_assert (ready->n_ready >= 1);
1374
  return ready->vec + ready->first - ready->n_ready + 1;
1375
}
1376
 
1377
/* Add an element INSN to the ready list so that it ends up with the
1378
   lowest/highest priority depending on FIRST_P.  */
1379
 
1380
HAIFA_INLINE static void
1381
ready_add (struct ready_list *ready, rtx insn, bool first_p)
1382
{
1383
  if (!first_p)
1384
    {
1385
      if (ready->first == ready->n_ready)
1386
        {
1387
          memmove (ready->vec + ready->veclen - ready->n_ready,
1388
                   ready_lastpos (ready),
1389
                   ready->n_ready * sizeof (rtx));
1390
          ready->first = ready->veclen - 1;
1391
        }
1392
      ready->vec[ready->first - ready->n_ready] = insn;
1393
    }
1394
  else
1395
    {
1396
      if (ready->first == ready->veclen - 1)
1397
        {
1398
          if (ready->n_ready)
1399
            /* ready_lastpos() fails when called with (ready->n_ready == 0).  */
1400
            memmove (ready->vec + ready->veclen - ready->n_ready - 1,
1401
                     ready_lastpos (ready),
1402
                     ready->n_ready * sizeof (rtx));
1403
          ready->first = ready->veclen - 2;
1404
        }
1405
      ready->vec[++(ready->first)] = insn;
1406
    }
1407
 
1408
  ready->n_ready++;
1409
  if (DEBUG_INSN_P (insn))
1410
    ready->n_debug++;
1411
 
1412
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
1413
  QUEUE_INDEX (insn) = QUEUE_READY;
1414
}
1415
 
1416
/* Remove the element with the highest priority from the ready list and
1417
   return it.  */
1418
 
1419
HAIFA_INLINE static rtx
1420
ready_remove_first (struct ready_list *ready)
1421
{
1422
  rtx t;
1423
 
1424
  gcc_assert (ready->n_ready);
1425
  t = ready->vec[ready->first--];
1426
  ready->n_ready--;
1427
  if (DEBUG_INSN_P (t))
1428
    ready->n_debug--;
1429
  /* If the queue becomes empty, reset it.  */
1430
  if (ready->n_ready == 0)
1431
    ready->first = ready->veclen - 1;
1432
 
1433
  gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
1434
  QUEUE_INDEX (t) = QUEUE_NOWHERE;
1435
 
1436
  return t;
1437
}
1438
 
1439
/* The following code implements multi-pass scheduling for the first
1440
   cycle.  In other words, we will try to choose ready insn which
1441
   permits to start maximum number of insns on the same cycle.  */
1442
 
1443
/* Return a pointer to the element INDEX from the ready.  INDEX for
1444
   insn with the highest priority is 0, and the lowest priority has
1445
   N_READY - 1.  */
1446
 
1447
rtx
1448
ready_element (struct ready_list *ready, int index)
1449
{
1450
  gcc_assert (ready->n_ready && index < ready->n_ready);
1451
 
1452
  return ready->vec[ready->first - index];
1453
}
1454
 
1455
/* Remove the element INDEX from the ready list and return it.  INDEX
1456
   for insn with the highest priority is 0, and the lowest priority
1457
   has N_READY - 1.  */
1458
 
1459
HAIFA_INLINE static rtx
1460
ready_remove (struct ready_list *ready, int index)
1461
{
1462
  rtx t;
1463
  int i;
1464
 
1465
  if (index == 0)
1466
    return ready_remove_first (ready);
1467
  gcc_assert (ready->n_ready && index < ready->n_ready);
1468
  t = ready->vec[ready->first - index];
1469
  ready->n_ready--;
1470
  if (DEBUG_INSN_P (t))
1471
    ready->n_debug--;
1472
  for (i = index; i < ready->n_ready; i++)
1473
    ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
1474
  QUEUE_INDEX (t) = QUEUE_NOWHERE;
1475
  return t;
1476
}
1477
 
1478
/* Remove INSN from the ready list.  */
1479
static void
1480
ready_remove_insn (rtx insn)
1481
{
1482
  int i;
1483
 
1484
  for (i = 0; i < readyp->n_ready; i++)
1485
    if (ready_element (readyp, i) == insn)
1486
      {
1487
        ready_remove (readyp, i);
1488
        return;
1489
      }
1490
  gcc_unreachable ();
1491
}
1492
 
1493
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
1494
   macro.  */
1495
 
1496
void
1497
ready_sort (struct ready_list *ready)
1498
{
1499
  int i;
1500
  rtx *first = ready_lastpos (ready);
1501
 
1502
  if (sched_pressure_p)
1503
    {
1504
      for (i = 0; i < ready->n_ready; i++)
1505
        setup_insn_reg_pressure_info (first[i]);
1506
    }
1507
  SCHED_SORT (first, ready->n_ready);
1508
}
1509
 
1510
/* PREV is an insn that is ready to execute.  Adjust its priority if that
1511
   will help shorten or lengthen register lifetimes as appropriate.  Also
1512
   provide a hook for the target to tweak itself.  */
1513
 
1514
HAIFA_INLINE static void
1515
adjust_priority (rtx prev)
1516
{
1517
  /* ??? There used to be code here to try and estimate how an insn
1518
     affected register lifetimes, but it did it by looking at REG_DEAD
1519
     notes, which we removed in schedule_region.  Nor did it try to
1520
     take into account register pressure or anything useful like that.
1521
 
1522
     Revisit when we have a machine model to work with and not before.  */
1523
 
1524
  if (targetm.sched.adjust_priority)
1525
    INSN_PRIORITY (prev) =
1526
      targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
1527
}
1528
 
1529
/* Advance DFA state STATE on one cycle.  */
1530
void
1531
advance_state (state_t state)
1532
{
1533
  if (targetm.sched.dfa_pre_advance_cycle)
1534
    targetm.sched.dfa_pre_advance_cycle ();
1535
 
1536
  if (targetm.sched.dfa_pre_cycle_insn)
1537
    state_transition (state,
1538
                      targetm.sched.dfa_pre_cycle_insn ());
1539
 
1540
  state_transition (state, NULL);
1541
 
1542
  if (targetm.sched.dfa_post_cycle_insn)
1543
    state_transition (state,
1544
                      targetm.sched.dfa_post_cycle_insn ());
1545
 
1546
  if (targetm.sched.dfa_post_advance_cycle)
1547
    targetm.sched.dfa_post_advance_cycle ();
1548
}
1549
 
1550
/* Advance time on one cycle.  */
1551
HAIFA_INLINE static void
1552
advance_one_cycle (void)
1553
{
1554
  advance_state (curr_state);
1555
  if (sched_verbose >= 6)
1556
    fprintf (sched_dump, ";;\tAdvanced a state.\n");
1557
}
1558
 
1559
/* Clock at which the previous instruction was issued.  */
1560
static int last_clock_var;
1561
 
1562
/* Update register pressure after scheduling INSN.  */
1563
static void
1564
update_register_pressure (rtx insn)
1565
{
1566
  struct reg_use_data *use;
1567
  struct reg_set_data *set;
1568
 
1569
  for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1570
    if (dying_use_p (use) && bitmap_bit_p (curr_reg_live, use->regno))
1571
      mark_regno_birth_or_death (use->regno, false);
1572
  for (set = INSN_REG_SET_LIST (insn); set != NULL; set = set->next_insn_set)
1573
    mark_regno_birth_or_death (set->regno, true);
1574
}
1575
 
1576
/* Set up or update (if UPDATE_P) max register pressure (see its
1577
   meaning in sched-int.h::_haifa_insn_data) for all current BB insns
1578
   after insn AFTER.  */
1579
static void
1580
setup_insn_max_reg_pressure (rtx after, bool update_p)
1581
{
1582
  int i, p;
1583
  bool eq_p;
1584
  rtx insn;
1585
  static int max_reg_pressure[N_REG_CLASSES];
1586
 
1587
  save_reg_pressure ();
1588
  for (i = 0; i < ira_reg_class_cover_size; i++)
1589
    max_reg_pressure[ira_reg_class_cover[i]]
1590
      = curr_reg_pressure[ira_reg_class_cover[i]];
1591
  for (insn = NEXT_INSN (after);
1592
       insn != NULL_RTX && ! BARRIER_P (insn)
1593
         && BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (after);
1594
       insn = NEXT_INSN (insn))
1595
    if (NONDEBUG_INSN_P (insn))
1596
      {
1597
        eq_p = true;
1598
        for (i = 0; i < ira_reg_class_cover_size; i++)
1599
          {
1600
            p = max_reg_pressure[ira_reg_class_cover[i]];
1601
            if (INSN_MAX_REG_PRESSURE (insn)[i] != p)
1602
              {
1603
                eq_p = false;
1604
                INSN_MAX_REG_PRESSURE (insn)[i]
1605
                  = max_reg_pressure[ira_reg_class_cover[i]];
1606
              }
1607
          }
1608
        if (update_p && eq_p)
1609
          break;
1610
        update_register_pressure (insn);
1611
        for (i = 0; i < ira_reg_class_cover_size; i++)
1612
          if (max_reg_pressure[ira_reg_class_cover[i]]
1613
              < curr_reg_pressure[ira_reg_class_cover[i]])
1614
            max_reg_pressure[ira_reg_class_cover[i]]
1615
              = curr_reg_pressure[ira_reg_class_cover[i]];
1616
      }
1617
  restore_reg_pressure ();
1618
}
1619
 
1620
/* Update the current register pressure after scheduling INSN.  Update
1621
   also max register pressure for unscheduled insns of the current
1622
   BB.  */
1623
static void
1624
update_reg_and_insn_max_reg_pressure (rtx insn)
1625
{
1626
  int i;
1627
  int before[N_REG_CLASSES];
1628
 
1629
  for (i = 0; i < ira_reg_class_cover_size; i++)
1630
    before[i] = curr_reg_pressure[ira_reg_class_cover[i]];
1631
  update_register_pressure (insn);
1632
  for (i = 0; i < ira_reg_class_cover_size; i++)
1633
    if (curr_reg_pressure[ira_reg_class_cover[i]] != before[i])
1634
      break;
1635
  if (i < ira_reg_class_cover_size)
1636
    setup_insn_max_reg_pressure (insn, true);
1637
}
1638
 
1639
/* Set up register pressure at the beginning of basic block BB whose
1640
   insns starting after insn AFTER.  Set up also max register pressure
1641
   for all insns of the basic block.  */
1642
void
1643
sched_setup_bb_reg_pressure_info (basic_block bb, rtx after)
1644
{
1645
  gcc_assert (sched_pressure_p);
1646
  initiate_bb_reg_pressure_info (bb);
1647
  setup_insn_max_reg_pressure (after, false);
1648
}
1649
 
1650
/* INSN is the "currently executing insn".  Launch each insn which was
1651
   waiting on INSN.  READY is the ready list which contains the insns
1652
   that are ready to fire.  CLOCK is the current cycle.  The function
1653
   returns necessary cycle advance after issuing the insn (it is not
1654
   zero for insns in a schedule group).  */
1655
 
1656
static int
1657
schedule_insn (rtx insn)
1658
{
1659
  sd_iterator_def sd_it;
1660
  dep_t dep;
1661
  int i;
1662
  int advance = 0;
1663
 
1664
  if (sched_verbose >= 1)
1665
    {
1666
      struct reg_pressure_data *pressure_info;
1667
      char buf[2048];
1668
 
1669
      print_insn (buf, insn, 0);
1670
      buf[40] = 0;
1671
      fprintf (sched_dump, ";;\t%3i--> %-40s:", clock_var, buf);
1672
 
1673
      if (recog_memoized (insn) < 0)
1674
        fprintf (sched_dump, "nothing");
1675
      else
1676
        print_reservation (sched_dump, insn);
1677
      pressure_info = INSN_REG_PRESSURE (insn);
1678
      if (pressure_info != NULL)
1679
        {
1680
          fputc (':', sched_dump);
1681
          for (i = 0; i < ira_reg_class_cover_size; i++)
1682
            fprintf (sched_dump, "%s%+d(%d)",
1683
                     reg_class_names[ira_reg_class_cover[i]],
1684
                     pressure_info[i].set_increase, pressure_info[i].change);
1685
        }
1686
      fputc ('\n', sched_dump);
1687
    }
1688
 
1689
  if (sched_pressure_p)
1690
    update_reg_and_insn_max_reg_pressure (insn);
1691
 
1692
  /* Scheduling instruction should have all its dependencies resolved and
1693
     should have been removed from the ready list.  */
1694
  gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
1695
 
1696
  /* Reset debug insns invalidated by moving this insn.  */
1697
  if (MAY_HAVE_DEBUG_INSNS && !DEBUG_INSN_P (insn))
1698
    for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
1699
         sd_iterator_cond (&sd_it, &dep);)
1700
      {
1701
        rtx dbg = DEP_PRO (dep);
1702
        struct reg_use_data *use, *next;
1703
 
1704
        gcc_assert (DEBUG_INSN_P (dbg));
1705
 
1706
        if (sched_verbose >= 6)
1707
          fprintf (sched_dump, ";;\t\tresetting: debug insn %d\n",
1708
                   INSN_UID (dbg));
1709
 
1710
        /* ??? Rather than resetting the debug insn, we might be able
1711
           to emit a debug temp before the just-scheduled insn, but
1712
           this would involve checking that the expression at the
1713
           point of the debug insn is equivalent to the expression
1714
           before the just-scheduled insn.  They might not be: the
1715
           expression in the debug insn may depend on other insns not
1716
           yet scheduled that set MEMs, REGs or even other debug
1717
           insns.  It's not clear that attempting to preserve debug
1718
           information in these cases is worth the effort, given how
1719
           uncommon these resets are and the likelihood that the debug
1720
           temps introduced won't survive the schedule change.  */
1721
        INSN_VAR_LOCATION_LOC (dbg) = gen_rtx_UNKNOWN_VAR_LOC ();
1722
        df_insn_rescan (dbg);
1723
 
1724
        /* Unknown location doesn't use any registers.  */
1725
        for (use = INSN_REG_USE_LIST (dbg); use != NULL; use = next)
1726
          {
1727
            struct reg_use_data *prev = use;
1728
 
1729
            /* Remove use from the cyclic next_regno_use chain first.  */
1730
            while (prev->next_regno_use != use)
1731
              prev = prev->next_regno_use;
1732
            prev->next_regno_use = use->next_regno_use;
1733
            next = use->next_insn_use;
1734
            free (use);
1735
          }
1736
        INSN_REG_USE_LIST (dbg) = NULL;
1737
 
1738
        /* We delete rather than resolve these deps, otherwise we
1739
           crash in sched_free_deps(), because forward deps are
1740
           expected to be released before backward deps.  */
1741
        sd_delete_dep (sd_it);
1742
      }
1743
 
1744
  gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
1745
  QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
1746
 
1747
  gcc_assert (INSN_TICK (insn) >= MIN_TICK);
1748
  if (INSN_TICK (insn) > clock_var)
1749
    /* INSN has been prematurely moved from the queue to the ready list.
1750
       This is possible only if following flag is set.  */
1751
    gcc_assert (flag_sched_stalled_insns);
1752
 
1753
  /* ??? Probably, if INSN is scheduled prematurely, we should leave
1754
     INSN_TICK untouched.  This is a machine-dependent issue, actually.  */
1755
  INSN_TICK (insn) = clock_var;
1756
 
1757
  /* Update dependent instructions.  */
1758
  for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
1759
       sd_iterator_cond (&sd_it, &dep);)
1760
    {
1761
      rtx next = DEP_CON (dep);
1762
 
1763
      /* Resolve the dependence between INSN and NEXT.
1764
         sd_resolve_dep () moves current dep to another list thus
1765
         advancing the iterator.  */
1766
      sd_resolve_dep (sd_it);
1767
 
1768
      /* Don't bother trying to mark next as ready if insn is a debug
1769
         insn.  If insn is the last hard dependency, it will have
1770
         already been discounted.  */
1771
      if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
1772
        continue;
1773
 
1774
      if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
1775
        {
1776
          int effective_cost;
1777
 
1778
          effective_cost = try_ready (next);
1779
 
1780
          if (effective_cost >= 0
1781
              && SCHED_GROUP_P (next)
1782
              && advance < effective_cost)
1783
            advance = effective_cost;
1784
        }
1785
      else
1786
        /* Check always has only one forward dependence (to the first insn in
1787
           the recovery block), therefore, this will be executed only once.  */
1788
        {
1789
          gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
1790
          fix_recovery_deps (RECOVERY_BLOCK (insn));
1791
        }
1792
    }
1793
 
1794
  /* This is the place where scheduler doesn't *basically* need backward and
1795
     forward dependencies for INSN anymore.  Nevertheless they are used in
1796
     heuristics in rank_for_schedule (), early_queue_to_ready () and in
1797
     some targets (e.g. rs6000).  Thus the earliest place where we *can*
1798
     remove dependencies is after targetm.sched.md_finish () call in
1799
     schedule_block ().  But, on the other side, the safest place to remove
1800
     dependencies is when we are finishing scheduling entire region.  As we
1801
     don't generate [many] dependencies during scheduling itself, we won't
1802
     need memory until beginning of next region.
1803
     Bottom line: Dependencies are removed for all insns in the end of
1804
     scheduling the region.  */
1805
 
1806
  /* Annotate the instruction with issue information -- TImode
1807
     indicates that the instruction is expected not to be able
1808
     to issue on the same cycle as the previous insn.  A machine
1809
     may use this information to decide how the instruction should
1810
     be aligned.  */
1811
  if (issue_rate > 1
1812
      && GET_CODE (PATTERN (insn)) != USE
1813
      && GET_CODE (PATTERN (insn)) != CLOBBER
1814
      && !DEBUG_INSN_P (insn))
1815
    {
1816
      if (reload_completed)
1817
        PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
1818
      last_clock_var = clock_var;
1819
    }
1820
 
1821
  return advance;
1822
}
1823
 
1824
/* Functions for handling of notes.  */
1825
 
1826
/* Add note list that ends on FROM_END to the end of TO_ENDP.  */
1827
void
1828
concat_note_lists (rtx from_end, rtx *to_endp)
1829
{
1830
  rtx from_start;
1831
 
1832
  /* It's easy when have nothing to concat.  */
1833
  if (from_end == NULL)
1834
    return;
1835
 
1836
  /* It's also easy when destination is empty.  */
1837
  if (*to_endp == NULL)
1838
    {
1839
      *to_endp = from_end;
1840
      return;
1841
    }
1842
 
1843
  from_start = from_end;
1844
  while (PREV_INSN (from_start) != NULL)
1845
    from_start = PREV_INSN (from_start);
1846
 
1847
  PREV_INSN (from_start) = *to_endp;
1848
  NEXT_INSN (*to_endp) = from_start;
1849
  *to_endp = from_end;
1850
}
1851
 
1852
/* Delete notes between HEAD and TAIL and put them in the chain
1853
   of notes ended by NOTE_LIST.  */
1854
void
1855
remove_notes (rtx head, rtx tail)
1856
{
1857
  rtx next_tail, insn, next;
1858
 
1859
  note_list = 0;
1860
  if (head == tail && !INSN_P (head))
1861
    return;
1862
 
1863
  next_tail = NEXT_INSN (tail);
1864
  for (insn = head; insn != next_tail; insn = next)
1865
    {
1866
      next = NEXT_INSN (insn);
1867
      if (!NOTE_P (insn))
1868
        continue;
1869
 
1870
      switch (NOTE_KIND (insn))
1871
        {
1872
        case NOTE_INSN_BASIC_BLOCK:
1873
          continue;
1874
 
1875
        case NOTE_INSN_EPILOGUE_BEG:
1876
          if (insn != tail)
1877
            {
1878
              remove_insn (insn);
1879
              add_reg_note (next, REG_SAVE_NOTE,
1880
                            GEN_INT (NOTE_INSN_EPILOGUE_BEG));
1881
              break;
1882
            }
1883
          /* FALLTHRU */
1884
 
1885
        default:
1886
          remove_insn (insn);
1887
 
1888
          /* Add the note to list that ends at NOTE_LIST.  */
1889
          PREV_INSN (insn) = note_list;
1890
          NEXT_INSN (insn) = NULL_RTX;
1891
          if (note_list)
1892
            NEXT_INSN (note_list) = insn;
1893
          note_list = insn;
1894
          break;
1895
        }
1896
 
1897
      gcc_assert ((sel_sched_p () || insn != tail) && insn != head);
1898
    }
1899
}
1900
 
1901
 
1902
/* Return the head and tail pointers of ebb starting at BEG and ending
1903
   at END.  */
1904
void
1905
get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
1906
{
1907
  rtx beg_head = BB_HEAD (beg);
1908
  rtx beg_tail = BB_END (beg);
1909
  rtx end_head = BB_HEAD (end);
1910
  rtx end_tail = BB_END (end);
1911
 
1912
  /* Don't include any notes or labels at the beginning of the BEG
1913
     basic block, or notes at the end of the END basic blocks.  */
1914
 
1915
  if (LABEL_P (beg_head))
1916
    beg_head = NEXT_INSN (beg_head);
1917
 
1918
  while (beg_head != beg_tail)
1919
    if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
1920
      beg_head = NEXT_INSN (beg_head);
1921
    else
1922
      break;
1923
 
1924
  *headp = beg_head;
1925
 
1926
  if (beg == end)
1927
    end_head = beg_head;
1928
  else if (LABEL_P (end_head))
1929
    end_head = NEXT_INSN (end_head);
1930
 
1931
  while (end_head != end_tail)
1932
    if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
1933
      end_tail = PREV_INSN (end_tail);
1934
    else
1935
      break;
1936
 
1937
  *tailp = end_tail;
1938
}
1939
 
1940
/* Return nonzero if there are no real insns in the range [ HEAD, TAIL ].  */
1941
 
1942
int
1943
no_real_insns_p (const_rtx head, const_rtx tail)
1944
{
1945
  while (head != NEXT_INSN (tail))
1946
    {
1947
      if (!NOTE_P (head) && !LABEL_P (head)
1948
          && !BOUNDARY_DEBUG_INSN_P (head))
1949
        return 0;
1950
      head = NEXT_INSN (head);
1951
    }
1952
  return 1;
1953
}
1954
 
1955
/* Restore-other-notes: NOTE_LIST is the end of a chain of notes
1956
   previously found among the insns.  Insert them just before HEAD.  */
1957
rtx
1958
restore_other_notes (rtx head, basic_block head_bb)
1959
{
1960
  if (note_list != 0)
1961
    {
1962
      rtx note_head = note_list;
1963
 
1964
      if (head)
1965
        head_bb = BLOCK_FOR_INSN (head);
1966
      else
1967
        head = NEXT_INSN (bb_note (head_bb));
1968
 
1969
      while (PREV_INSN (note_head))
1970
        {
1971
          set_block_for_insn (note_head, head_bb);
1972
          note_head = PREV_INSN (note_head);
1973
        }
1974
      /* In the above cycle we've missed this note.  */
1975
      set_block_for_insn (note_head, head_bb);
1976
 
1977
      PREV_INSN (note_head) = PREV_INSN (head);
1978
      NEXT_INSN (PREV_INSN (head)) = note_head;
1979
      PREV_INSN (head) = note_list;
1980
      NEXT_INSN (note_list) = head;
1981
 
1982
      if (BLOCK_FOR_INSN (head) != head_bb)
1983
        BB_END (head_bb) = note_list;
1984
 
1985
      head = note_head;
1986
    }
1987
 
1988
  return head;
1989
}
1990
 
1991
/* Move insns that became ready to fire from queue to ready list.  */
1992
 
1993
static void
1994
queue_to_ready (struct ready_list *ready)
1995
{
1996
  rtx insn;
1997
  rtx link;
1998
  rtx skip_insn;
1999
 
2000
  q_ptr = NEXT_Q (q_ptr);
2001
 
2002
  if (dbg_cnt (sched_insn) == false)
2003 378 julius
    /* If debug counter is activated do not requeue insn next after
2004
       last_scheduled_insn.  */
2005
    skip_insn = next_nonnote_nondebug_insn (last_scheduled_insn);
2006 280 jeremybenn
  else
2007
    skip_insn = NULL_RTX;
2008
 
2009
  /* Add all pending insns that can be scheduled without stalls to the
2010
     ready list.  */
2011
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
2012
    {
2013
      insn = XEXP (link, 0);
2014
      q_size -= 1;
2015
 
2016
      if (sched_verbose >= 2)
2017
        fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2018
                 (*current_sched_info->print_insn) (insn, 0));
2019
 
2020
      /* If the ready list is full, delay the insn for 1 cycle.
2021
         See the comment in schedule_block for the rationale.  */
2022
      if (!reload_completed
2023
          && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
2024
          && !SCHED_GROUP_P (insn)
2025
          && insn != skip_insn)
2026
        {
2027
          if (sched_verbose >= 2)
2028
            fprintf (sched_dump, "requeued because ready full\n");
2029
          queue_insn (insn, 1);
2030
        }
2031
      else
2032
        {
2033
          ready_add (ready, insn, false);
2034
          if (sched_verbose >= 2)
2035
            fprintf (sched_dump, "moving to ready without stalls\n");
2036
        }
2037
    }
2038
  free_INSN_LIST_list (&insn_queue[q_ptr]);
2039
 
2040
  /* If there are no ready insns, stall until one is ready and add all
2041
     of the pending insns at that point to the ready list.  */
2042
  if (ready->n_ready == 0)
2043
    {
2044
      int stalls;
2045
 
2046
      for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
2047
        {
2048
          if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2049
            {
2050
              for (; link; link = XEXP (link, 1))
2051
                {
2052
                  insn = XEXP (link, 0);
2053
                  q_size -= 1;
2054
 
2055
                  if (sched_verbose >= 2)
2056
                    fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2057
                             (*current_sched_info->print_insn) (insn, 0));
2058
 
2059
                  ready_add (ready, insn, false);
2060
                  if (sched_verbose >= 2)
2061
                    fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
2062
                }
2063
              free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
2064
 
2065
              advance_one_cycle ();
2066
 
2067
              break;
2068
            }
2069
 
2070
          advance_one_cycle ();
2071
        }
2072
 
2073
      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
2074
      clock_var += stalls;
2075
    }
2076
}
2077
 
2078
/* Used by early_queue_to_ready.  Determines whether it is "ok" to
2079
   prematurely move INSN from the queue to the ready list.  Currently,
2080
   if a target defines the hook 'is_costly_dependence', this function
2081
   uses the hook to check whether there exist any dependences which are
2082
   considered costly by the target, between INSN and other insns that
2083
   have already been scheduled.  Dependences are checked up to Y cycles
2084
   back, with default Y=1; The flag -fsched-stalled-insns-dep=Y allows
2085
   controlling this value.
2086
   (Other considerations could be taken into account instead (or in
2087
   addition) depending on user flags and target hooks.  */
2088
 
2089
static bool
2090
ok_for_early_queue_removal (rtx insn)
2091
{
2092
  int n_cycles;
2093
  rtx prev_insn = last_scheduled_insn;
2094
 
2095
  if (targetm.sched.is_costly_dependence)
2096
    {
2097
      for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
2098
        {
2099
          for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
2100
            {
2101
              int cost;
2102
 
2103
              if (prev_insn == current_sched_info->prev_head)
2104
                {
2105
                  prev_insn = NULL;
2106
                  break;
2107
                }
2108
 
2109
              if (!NOTE_P (prev_insn))
2110
                {
2111
                  dep_t dep;
2112
 
2113
                  dep = sd_find_dep_between (prev_insn, insn, true);
2114
 
2115
                  if (dep != NULL)
2116
                    {
2117
                      cost = dep_cost (dep);
2118
 
2119
                      if (targetm.sched.is_costly_dependence (dep, cost,
2120
                                flag_sched_stalled_insns_dep - n_cycles))
2121
                        return false;
2122
                    }
2123
                }
2124
 
2125
              if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
2126
                break;
2127
            }
2128
 
2129
          if (!prev_insn)
2130
            break;
2131
          prev_insn = PREV_INSN (prev_insn);
2132
        }
2133
    }
2134
 
2135
  return true;
2136
}
2137
 
2138
 
2139
/* Remove insns from the queue, before they become "ready" with respect
2140
   to FU latency considerations.  */
2141
 
2142
static int
2143
early_queue_to_ready (state_t state, struct ready_list *ready)
2144
{
2145
  rtx insn;
2146
  rtx link;
2147
  rtx next_link;
2148
  rtx prev_link;
2149
  bool move_to_ready;
2150
  int cost;
2151
  state_t temp_state = alloca (dfa_state_size);
2152
  int stalls;
2153
  int insns_removed = 0;
2154
 
2155
  /*
2156
     Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
2157
     function:
2158
 
2159
     X == 0: There is no limit on how many queued insns can be removed
2160
             prematurely.  (flag_sched_stalled_insns = -1).
2161
 
2162
     X >= 1: Only X queued insns can be removed prematurely in each
2163
             invocation.  (flag_sched_stalled_insns = X).
2164
 
2165
     Otherwise: Early queue removal is disabled.
2166
         (flag_sched_stalled_insns = 0)
2167
  */
2168
 
2169
  if (! flag_sched_stalled_insns)
2170
    return 0;
2171
 
2172
  for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
2173
    {
2174
      if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2175
        {
2176
          if (sched_verbose > 6)
2177
            fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
2178
 
2179
          prev_link = 0;
2180
          while (link)
2181
            {
2182
              next_link = XEXP (link, 1);
2183
              insn = XEXP (link, 0);
2184
              if (insn && sched_verbose > 6)
2185
                print_rtl_single (sched_dump, insn);
2186
 
2187
              memcpy (temp_state, state, dfa_state_size);
2188
              if (recog_memoized (insn) < 0)
2189
                /* non-negative to indicate that it's not ready
2190
                   to avoid infinite Q->R->Q->R... */
2191
                cost = 0;
2192
              else
2193
                cost = state_transition (temp_state, insn);
2194
 
2195
              if (sched_verbose >= 6)
2196
                fprintf (sched_dump, "transition cost = %d\n", cost);
2197
 
2198
              move_to_ready = false;
2199
              if (cost < 0)
2200
                {
2201
                  move_to_ready = ok_for_early_queue_removal (insn);
2202
                  if (move_to_ready == true)
2203
                    {
2204
                      /* move from Q to R */
2205
                      q_size -= 1;
2206
                      ready_add (ready, insn, false);
2207
 
2208
                      if (prev_link)
2209
                        XEXP (prev_link, 1) = next_link;
2210
                      else
2211
                        insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
2212
 
2213
                      free_INSN_LIST_node (link);
2214
 
2215
                      if (sched_verbose >= 2)
2216
                        fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
2217
                                 (*current_sched_info->print_insn) (insn, 0));
2218
 
2219
                      insns_removed++;
2220
                      if (insns_removed == flag_sched_stalled_insns)
2221
                        /* Remove no more than flag_sched_stalled_insns insns
2222
                           from Q at a time.  */
2223
                        return insns_removed;
2224
                    }
2225
                }
2226
 
2227
              if (move_to_ready == false)
2228
                prev_link = link;
2229
 
2230
              link = next_link;
2231
            } /* while link */
2232
        } /* if link */
2233
 
2234
    } /* for stalls.. */
2235
 
2236
  return insns_removed;
2237
}
2238
 
2239
 
2240
/* Print the ready list for debugging purposes.  Callable from debugger.  */
2241
 
2242
static void
2243
debug_ready_list (struct ready_list *ready)
2244
{
2245
  rtx *p;
2246
  int i;
2247
 
2248
  if (ready->n_ready == 0)
2249
    {
2250
      fprintf (sched_dump, "\n");
2251
      return;
2252
    }
2253
 
2254
  p = ready_lastpos (ready);
2255
  for (i = 0; i < ready->n_ready; i++)
2256
    {
2257
      fprintf (sched_dump, "  %s:%d",
2258
               (*current_sched_info->print_insn) (p[i], 0),
2259
               INSN_LUID (p[i]));
2260
      if (sched_pressure_p)
2261
        fprintf (sched_dump, "(cost=%d",
2262
                 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (p[i]));
2263
      if (INSN_TICK (p[i]) > clock_var)
2264
        fprintf (sched_dump, ":delay=%d", INSN_TICK (p[i]) - clock_var);
2265
      if (sched_pressure_p)
2266
        fprintf (sched_dump, ")");
2267
    }
2268
  fprintf (sched_dump, "\n");
2269
}
2270
 
2271
/* Search INSN for REG_SAVE_NOTE notes and convert them back into insn
2272
   NOTEs.  This is used for NOTE_INSN_EPILOGUE_BEG, so that sched-ebb
2273
   replaces the epilogue note in the correct basic block.  */
2274
void
2275
reemit_notes (rtx insn)
2276
{
2277
  rtx note, last = insn;
2278
 
2279
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2280
    {
2281
      if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
2282
        {
2283
          enum insn_note note_type = (enum insn_note) INTVAL (XEXP (note, 0));
2284
 
2285
          last = emit_note_before (note_type, last);
2286
          remove_note (insn, note);
2287
        }
2288
    }
2289
}
2290
 
2291
/* Move INSN.  Reemit notes if needed.  Update CFG, if needed.  */
2292
static void
2293
move_insn (rtx insn, rtx last, rtx nt)
2294
{
2295
  if (PREV_INSN (insn) != last)
2296
    {
2297
      basic_block bb;
2298
      rtx note;
2299
      int jump_p = 0;
2300
 
2301
      bb = BLOCK_FOR_INSN (insn);
2302
 
2303
      /* BB_HEAD is either LABEL or NOTE.  */
2304
      gcc_assert (BB_HEAD (bb) != insn);
2305
 
2306
      if (BB_END (bb) == insn)
2307
        /* If this is last instruction in BB, move end marker one
2308
           instruction up.  */
2309
        {
2310
          /* Jumps are always placed at the end of basic block.  */
2311
          jump_p = control_flow_insn_p (insn);
2312
 
2313
          gcc_assert (!jump_p
2314
                      || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
2315
                          && IS_SPECULATION_BRANCHY_CHECK_P (insn))
2316
                      || (common_sched_info->sched_pass_id
2317
                          == SCHED_EBB_PASS));
2318
 
2319
          gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
2320
 
2321
          BB_END (bb) = PREV_INSN (insn);
2322
        }
2323
 
2324
      gcc_assert (BB_END (bb) != last);
2325
 
2326
      if (jump_p)
2327
        /* We move the block note along with jump.  */
2328
        {
2329
          gcc_assert (nt);
2330
 
2331
          note = NEXT_INSN (insn);
2332
          while (NOTE_NOT_BB_P (note) && note != nt)
2333
            note = NEXT_INSN (note);
2334
 
2335
          if (note != nt
2336
              && (LABEL_P (note)
2337
                  || BARRIER_P (note)))
2338
            note = NEXT_INSN (note);
2339
 
2340
          gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
2341
        }
2342
      else
2343
        note = insn;
2344
 
2345
      NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
2346
      PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
2347
 
2348
      NEXT_INSN (note) = NEXT_INSN (last);
2349
      PREV_INSN (NEXT_INSN (last)) = note;
2350
 
2351
      NEXT_INSN (last) = insn;
2352
      PREV_INSN (insn) = last;
2353
 
2354
      bb = BLOCK_FOR_INSN (last);
2355
 
2356
      if (jump_p)
2357
        {
2358
          fix_jump_move (insn);
2359
 
2360
          if (BLOCK_FOR_INSN (insn) != bb)
2361
            move_block_after_check (insn);
2362
 
2363
          gcc_assert (BB_END (bb) == last);
2364
        }
2365
 
2366
      df_insn_change_bb (insn, bb);
2367
 
2368
      /* Update BB_END, if needed.  */
2369
      if (BB_END (bb) == last)
2370
        BB_END (bb) = insn;
2371
    }
2372
 
2373
  SCHED_GROUP_P (insn) = 0;
2374
}
2375
 
2376
/* Return true if scheduling INSN will finish current clock cycle.  */
2377
static bool
2378
insn_finishes_cycle_p (rtx insn)
2379
{
2380
  if (SCHED_GROUP_P (insn))
2381
    /* After issuing INSN, rest of the sched_group will be forced to issue
2382
       in order.  Don't make any plans for the rest of cycle.  */
2383
    return true;
2384
 
2385
  /* Finishing the block will, apparently, finish the cycle.  */
2386
  if (current_sched_info->insn_finishes_block_p
2387
      && current_sched_info->insn_finishes_block_p (insn))
2388
    return true;
2389
 
2390
  return false;
2391
}
2392
 
2393
/* The following structure describe an entry of the stack of choices.  */
2394
struct choice_entry
2395
{
2396
  /* Ordinal number of the issued insn in the ready queue.  */
2397
  int index;
2398
  /* The number of the rest insns whose issues we should try.  */
2399
  int rest;
2400
  /* The number of issued essential insns.  */
2401
  int n;
2402
  /* State after issuing the insn.  */
2403
  state_t state;
2404
};
2405
 
2406
/* The following array is used to implement a stack of choices used in
2407
   function max_issue.  */
2408
static struct choice_entry *choice_stack;
2409
 
2410
/* The following variable value is number of essential insns issued on
2411
   the current cycle.  An insn is essential one if it changes the
2412
   processors state.  */
2413
int cycle_issued_insns;
2414
 
2415
/* This holds the value of the target dfa_lookahead hook.  */
2416
int dfa_lookahead;
2417
 
2418
/* The following variable value is maximal number of tries of issuing
2419
   insns for the first cycle multipass insn scheduling.  We define
2420
   this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE).  We would not
2421
   need this constraint if all real insns (with non-negative codes)
2422
   had reservations because in this case the algorithm complexity is
2423
   O(DFA_LOOKAHEAD**ISSUE_RATE).  Unfortunately, the dfa descriptions
2424
   might be incomplete and such insn might occur.  For such
2425
   descriptions, the complexity of algorithm (without the constraint)
2426
   could achieve DFA_LOOKAHEAD ** N , where N is the queue length.  */
2427
static int max_lookahead_tries;
2428
 
2429
/* The following value is value of hook
2430
   `first_cycle_multipass_dfa_lookahead' at the last call of
2431
   `max_issue'.  */
2432
static int cached_first_cycle_multipass_dfa_lookahead = 0;
2433
 
2434
/* The following value is value of `issue_rate' at the last call of
2435
   `sched_init'.  */
2436
static int cached_issue_rate = 0;
2437
 
2438
/* The following function returns maximal (or close to maximal) number
2439
   of insns which can be issued on the same cycle and one of which
2440
   insns is insns with the best rank (the first insn in READY).  To
2441
   make this function tries different samples of ready insns.  READY
2442
   is current queue `ready'.  Global array READY_TRY reflects what
2443
   insns are already issued in this try.  MAX_POINTS is the sum of points
2444
   of all instructions in READY.  The function stops immediately,
2445
   if it reached the such a solution, that all instruction can be issued.
2446
   INDEX will contain index of the best insn in READY.  The following
2447
   function is used only for first cycle multipass scheduling.
2448
 
2449
   PRIVILEGED_N >= 0
2450
 
2451
   This function expects recognized insns only.  All USEs,
2452
   CLOBBERs, etc must be filtered elsewhere.  */
2453
int
2454
max_issue (struct ready_list *ready, int privileged_n, state_t state,
2455
           int *index)
2456
{
2457
  int n, i, all, n_ready, best, delay, tries_num, max_points;
2458
  int more_issue;
2459
  struct choice_entry *top;
2460
  rtx insn;
2461
 
2462
  n_ready = ready->n_ready;
2463
  gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
2464
              && privileged_n <= n_ready);
2465
 
2466
  /* Init MAX_LOOKAHEAD_TRIES.  */
2467
  if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
2468
    {
2469
      cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
2470
      max_lookahead_tries = 100;
2471
      for (i = 0; i < issue_rate; i++)
2472
        max_lookahead_tries *= dfa_lookahead;
2473
    }
2474
 
2475
  /* Init max_points.  */
2476
  max_points = 0;
2477
  more_issue = issue_rate - cycle_issued_insns;
2478
 
2479
  /* ??? We used to assert here that we never issue more insns than issue_rate.
2480
     However, some targets (e.g. MIPS/SB1) claim lower issue rate than can be
2481
     achieved to get better performance.  Until these targets are fixed to use
2482
     scheduler hooks to manipulate insns priority instead, the assert should
2483
     be disabled.
2484
 
2485
     gcc_assert (more_issue >= 0);  */
2486
 
2487
  for (i = 0; i < n_ready; i++)
2488
    if (!ready_try [i])
2489
      {
2490
        if (more_issue-- > 0)
2491
          max_points += ISSUE_POINTS (ready_element (ready, i));
2492
        else
2493
          break;
2494
      }
2495
 
2496
  /* The number of the issued insns in the best solution.  */
2497
  best = 0;
2498
 
2499
  top = choice_stack;
2500
 
2501
  /* Set initial state of the search.  */
2502
  memcpy (top->state, state, dfa_state_size);
2503
  top->rest = dfa_lookahead;
2504
  top->n = 0;
2505
 
2506
  /* Count the number of the insns to search among.  */
2507
  for (all = i = 0; i < n_ready; i++)
2508
    if (!ready_try [i])
2509
      all++;
2510
 
2511
  /* I is the index of the insn to try next.  */
2512
  i = 0;
2513
  tries_num = 0;
2514
  for (;;)
2515
    {
2516
      if (/* If we've reached a dead end or searched enough of what we have
2517
             been asked...  */
2518
          top->rest == 0
2519
          /* Or have nothing else to try.  */
2520
          || i >= n_ready)
2521
        {
2522
          /* ??? (... || i == n_ready).  */
2523
          gcc_assert (i <= n_ready);
2524
 
2525
          if (top == choice_stack)
2526
            break;
2527
 
2528
          if (best < top - choice_stack)
2529
            {
2530
              if (privileged_n)
2531
                {
2532
                  n = privileged_n;
2533
                  /* Try to find issued privileged insn.  */
2534
                  while (n && !ready_try[--n]);
2535
                }
2536
 
2537
              if (/* If all insns are equally good...  */
2538
                  privileged_n == 0
2539
                  /* Or a privileged insn will be issued.  */
2540
                  || ready_try[n])
2541
                /* Then we have a solution.  */
2542
                {
2543
                  best = top - choice_stack;
2544
                  /* This is the index of the insn issued first in this
2545
                     solution.  */
2546
                  *index = choice_stack [1].index;
2547
                  if (top->n == max_points || best == all)
2548
                    break;
2549
                }
2550
            }
2551
 
2552
          /* Set ready-list index to point to the last insn
2553
             ('i++' below will advance it to the next insn).  */
2554
          i = top->index;
2555
 
2556
          /* Backtrack.  */
2557
          ready_try [i] = 0;
2558
          top--;
2559
          memcpy (state, top->state, dfa_state_size);
2560
        }
2561
      else if (!ready_try [i])
2562
        {
2563
          tries_num++;
2564
          if (tries_num > max_lookahead_tries)
2565
            break;
2566
          insn = ready_element (ready, i);
2567
          delay = state_transition (state, insn);
2568
          if (delay < 0)
2569
            {
2570
              if (state_dead_lock_p (state)
2571
                  || insn_finishes_cycle_p (insn))
2572
                /* We won't issue any more instructions in the next
2573
                   choice_state.  */
2574
                top->rest = 0;
2575
              else
2576
                top->rest--;
2577
 
2578
              n = top->n;
2579
              if (memcmp (top->state, state, dfa_state_size) != 0)
2580
                n += ISSUE_POINTS (insn);
2581
 
2582
              /* Advance to the next choice_entry.  */
2583
              top++;
2584
              /* Initialize it.  */
2585
              top->rest = dfa_lookahead;
2586
              top->index = i;
2587
              top->n = n;
2588
              memcpy (top->state, state, dfa_state_size);
2589
 
2590
              ready_try [i] = 1;
2591
              i = -1;
2592
            }
2593
        }
2594
 
2595
      /* Increase ready-list index.  */
2596
      i++;
2597
    }
2598
 
2599
  /* Restore the original state of the DFA.  */
2600
  memcpy (state, choice_stack->state, dfa_state_size);
2601
 
2602
  return best;
2603
}
2604
 
2605
/* The following function chooses insn from READY and modifies
2606
   READY.  The following function is used only for first
2607
   cycle multipass scheduling.
2608
   Return:
2609
   -1 if cycle should be advanced,
2610
 
2611
   1 if choose_ready () should be restarted without advancing the cycle.  */
2612
static int
2613
choose_ready (struct ready_list *ready, rtx *insn_ptr)
2614
{
2615
  int lookahead;
2616
 
2617
  if (dbg_cnt (sched_insn) == false)
2618
    {
2619
      rtx insn;
2620
 
2621
      insn = next_nonnote_insn (last_scheduled_insn);
2622
 
2623
      if (QUEUE_INDEX (insn) == QUEUE_READY)
2624
        /* INSN is in the ready_list.  */
2625
        {
2626
          ready_remove_insn (insn);
2627
          *insn_ptr = insn;
2628
          return 0;
2629
        }
2630
 
2631
      /* INSN is in the queue.  Advance cycle to move it to the ready list.  */
2632
      return -1;
2633
    }
2634
 
2635
  lookahead = 0;
2636
 
2637
  if (targetm.sched.first_cycle_multipass_dfa_lookahead)
2638
    lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
2639
  if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
2640
      || DEBUG_INSN_P (ready_element (ready, 0)))
2641
    {
2642
      *insn_ptr = ready_remove_first (ready);
2643
      return 0;
2644
    }
2645
  else
2646
    {
2647
      /* Try to choose the better insn.  */
2648
      int index = 0, i, n;
2649
      rtx insn;
2650
      int try_data = 1, try_control = 1;
2651
      ds_t ts;
2652
 
2653
      insn = ready_element (ready, 0);
2654
      if (INSN_CODE (insn) < 0)
2655
        {
2656
          *insn_ptr = ready_remove_first (ready);
2657
          return 0;
2658
        }
2659
 
2660
      if (spec_info
2661
          && spec_info->flags & (PREFER_NON_DATA_SPEC
2662
                                 | PREFER_NON_CONTROL_SPEC))
2663
        {
2664
          for (i = 0, n = ready->n_ready; i < n; i++)
2665
            {
2666
              rtx x;
2667
              ds_t s;
2668
 
2669
              x = ready_element (ready, i);
2670
              s = TODO_SPEC (x);
2671
 
2672
              if (spec_info->flags & PREFER_NON_DATA_SPEC
2673
                  && !(s & DATA_SPEC))
2674
                {
2675
                  try_data = 0;
2676
                  if (!(spec_info->flags & PREFER_NON_CONTROL_SPEC)
2677
                      || !try_control)
2678
                    break;
2679
                }
2680
 
2681
              if (spec_info->flags & PREFER_NON_CONTROL_SPEC
2682
                  && !(s & CONTROL_SPEC))
2683
                {
2684
                  try_control = 0;
2685
                  if (!(spec_info->flags & PREFER_NON_DATA_SPEC) || !try_data)
2686
                    break;
2687
                }
2688
            }
2689
        }
2690
 
2691
      ts = TODO_SPEC (insn);
2692
      if ((ts & SPECULATIVE)
2693
          && (((!try_data && (ts & DATA_SPEC))
2694
               || (!try_control && (ts & CONTROL_SPEC)))
2695
              || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2696
                  && !targetm.sched
2697
                  .first_cycle_multipass_dfa_lookahead_guard_spec (insn))))
2698
        /* Discard speculative instruction that stands first in the ready
2699
           list.  */
2700
        {
2701
          change_queue_index (insn, 1);
2702
          return 1;
2703
        }
2704
 
2705
      ready_try[0] = 0;
2706
 
2707
      for (i = 1; i < ready->n_ready; i++)
2708
        {
2709
          insn = ready_element (ready, i);
2710
 
2711
          ready_try [i]
2712
            = ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2713
               || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC)));
2714
        }
2715
 
2716
      /* Let the target filter the search space.  */
2717
      for (i = 1; i < ready->n_ready; i++)
2718
        if (!ready_try[i])
2719
          {
2720
            insn = ready_element (ready, i);
2721
 
2722
#ifdef ENABLE_CHECKING
2723
            /* If this insn is recognizable we should have already
2724
               recognized it earlier.
2725
               ??? Not very clear where this is supposed to be done.
2726
               See dep_cost_1.  */
2727
            gcc_assert (INSN_CODE (insn) >= 0
2728
                        || recog_memoized (insn) < 0);
2729
#endif
2730
 
2731
            ready_try [i]
2732
              = (/* INSN_CODE check can be omitted here as it is also done later
2733
                    in max_issue ().  */
2734
                 INSN_CODE (insn) < 0
2735
                 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2736
                     && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2737
                     (insn)));
2738
          }
2739
 
2740
      if (max_issue (ready, 1, curr_state, &index) == 0)
2741
        {
2742
          *insn_ptr = ready_remove_first (ready);
2743
          if (sched_verbose >= 4)
2744
            fprintf (sched_dump, ";;\t\tChosen insn (but can't issue) : %s \n",
2745
                     (*current_sched_info->print_insn) (*insn_ptr, 0));
2746
          return 0;
2747
        }
2748
      else
2749
        {
2750
          if (sched_verbose >= 4)
2751
            fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
2752
                     (*current_sched_info->print_insn)
2753
                     (ready_element (ready, index), 0));
2754
 
2755
          *insn_ptr = ready_remove (ready, index);
2756
          return 0;
2757
        }
2758
    }
2759
}
2760
 
2761
/* Use forward list scheduling to rearrange insns of block pointed to by
2762
   TARGET_BB, possibly bringing insns from subsequent blocks in the same
2763
   region.  */
2764
 
2765
void
2766
schedule_block (basic_block *target_bb)
2767
{
2768
  int i, first_cycle_insn_p;
2769
  int can_issue_more;
2770
  state_t temp_state = NULL;  /* It is used for multipass scheduling.  */
2771
  int sort_p, advance, start_clock_var;
2772
 
2773
  /* Head/tail info for this block.  */
2774
  rtx prev_head = current_sched_info->prev_head;
2775
  rtx next_tail = current_sched_info->next_tail;
2776
  rtx head = NEXT_INSN (prev_head);
2777
  rtx tail = PREV_INSN (next_tail);
2778
 
2779
  /* We used to have code to avoid getting parameters moved from hard
2780
     argument registers into pseudos.
2781
 
2782
     However, it was removed when it proved to be of marginal benefit
2783
     and caused problems because schedule_block and compute_forward_dependences
2784
     had different notions of what the "head" insn was.  */
2785
 
2786
  gcc_assert (head != tail || INSN_P (head));
2787
 
2788
  haifa_recovery_bb_recently_added_p = false;
2789
 
2790
  /* Debug info.  */
2791
  if (sched_verbose)
2792
    dump_new_block_header (0, *target_bb, head, tail);
2793
 
2794
  state_reset (curr_state);
2795
 
2796
  /* Clear the ready list.  */
2797
  ready.first = ready.veclen - 1;
2798
  ready.n_ready = 0;
2799
  ready.n_debug = 0;
2800
 
2801
  /* It is used for first cycle multipass scheduling.  */
2802
  temp_state = alloca (dfa_state_size);
2803
 
2804
  if (targetm.sched.md_init)
2805
    targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);
2806
 
2807
  /* We start inserting insns after PREV_HEAD.  */
2808
  last_scheduled_insn = prev_head;
2809
 
2810
  gcc_assert ((NOTE_P (last_scheduled_insn)
2811
               || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
2812
              && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
2813
 
2814
  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
2815
     queue.  */
2816
  q_ptr = 0;
2817
  q_size = 0;
2818
 
2819
  insn_queue = XALLOCAVEC (rtx, max_insn_queue_index + 1);
2820
  memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
2821
 
2822
  /* Start just before the beginning of time.  */
2823
  clock_var = -1;
2824
 
2825
  /* We need queue and ready lists and clock_var be initialized
2826
     in try_ready () (which is called through init_ready_list ()).  */
2827
  (*current_sched_info->init_ready_list) ();
2828
 
2829
  /* The algorithm is O(n^2) in the number of ready insns at any given
2830
     time in the worst case.  Before reload we are more likely to have
2831
     big lists so truncate them to a reasonable size.  */
2832
  if (!reload_completed
2833
      && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
2834
    {
2835
      ready_sort (&ready);
2836
 
2837
      /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
2838
         If there are debug insns, we know they're first.  */
2839
      for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
2840
        if (!SCHED_GROUP_P (ready_element (&ready, i)))
2841
          break;
2842
 
2843
      if (sched_verbose >= 2)
2844
        {
2845
          fprintf (sched_dump,
2846
                   ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
2847
          fprintf (sched_dump,
2848
                   ";;\t\t before reload => truncated to %d insns\n", i);
2849
        }
2850
 
2851
      /* Delay all insns past it for 1 cycle.  If debug counter is
2852
         activated make an exception for the insn right after
2853
         last_scheduled_insn.  */
2854
      {
2855
        rtx skip_insn;
2856
 
2857
        if (dbg_cnt (sched_insn) == false)
2858
          skip_insn = next_nonnote_insn (last_scheduled_insn);
2859
        else
2860
          skip_insn = NULL_RTX;
2861
 
2862
        while (i < ready.n_ready)
2863
          {
2864
            rtx insn;
2865
 
2866
            insn = ready_remove (&ready, i);
2867
 
2868
            if (insn != skip_insn)
2869
              queue_insn (insn, 1);
2870
          }
2871
      }
2872
    }
2873
 
2874
  /* Now we can restore basic block notes and maintain precise cfg.  */
2875
  restore_bb_notes (*target_bb);
2876
 
2877
  last_clock_var = -1;
2878
 
2879
  advance = 0;
2880
 
2881
  sort_p = TRUE;
2882
  /* Loop until all the insns in BB are scheduled.  */
2883
  while ((*current_sched_info->schedule_more_p) ())
2884
    {
2885
      do
2886
        {
2887
          start_clock_var = clock_var;
2888
 
2889
          clock_var++;
2890
 
2891
          advance_one_cycle ();
2892
 
2893
          /* Add to the ready list all pending insns that can be issued now.
2894
             If there are no ready insns, increment clock until one
2895
             is ready and add all pending insns at that point to the ready
2896
             list.  */
2897
          queue_to_ready (&ready);
2898
 
2899
          gcc_assert (ready.n_ready);
2900
 
2901
          if (sched_verbose >= 2)
2902
            {
2903
              fprintf (sched_dump, ";;\t\tReady list after queue_to_ready:  ");
2904
              debug_ready_list (&ready);
2905
            }
2906
          advance -= clock_var - start_clock_var;
2907
        }
2908
      while (advance > 0);
2909
 
2910
      if (sort_p)
2911
        {
2912
          /* Sort the ready list based on priority.  */
2913
          ready_sort (&ready);
2914
 
2915
          if (sched_verbose >= 2)
2916
            {
2917
              fprintf (sched_dump, ";;\t\tReady list after ready_sort:  ");
2918
              debug_ready_list (&ready);
2919
            }
2920
        }
2921
 
2922
      /* We don't want md sched reorder to even see debug isns, so put
2923
         them out right away.  */
2924
      if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2925
        {
2926
          if (control_flow_insn_p (last_scheduled_insn))
2927
            {
2928
              *target_bb = current_sched_info->advance_target_bb
2929
                (*target_bb, 0);
2930
 
2931
              if (sched_verbose)
2932
                {
2933
                  rtx x;
2934
 
2935
                  x = next_real_insn (last_scheduled_insn);
2936
                  gcc_assert (x);
2937
                  dump_new_block_header (1, *target_bb, x, tail);
2938
                }
2939
 
2940
              last_scheduled_insn = bb_note (*target_bb);
2941
            }
2942
 
2943
          while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2944
            {
2945
              rtx insn = ready_remove_first (&ready);
2946
              gcc_assert (DEBUG_INSN_P (insn));
2947
              (*current_sched_info->begin_schedule_ready) (insn,
2948
                                                           last_scheduled_insn);
2949
              move_insn (insn, last_scheduled_insn,
2950
                         current_sched_info->next_tail);
2951
              last_scheduled_insn = insn;
2952
              advance = schedule_insn (insn);
2953
              gcc_assert (advance == 0);
2954
              if (ready.n_ready > 0)
2955
                ready_sort (&ready);
2956
            }
2957
 
2958
          if (!ready.n_ready)
2959
            continue;
2960
        }
2961
 
2962
      /* Allow the target to reorder the list, typically for
2963
         better instruction bundling.  */
2964
      if (sort_p && targetm.sched.reorder
2965
          && (ready.n_ready == 0
2966
              || !SCHED_GROUP_P (ready_element (&ready, 0))))
2967
        can_issue_more =
2968
          targetm.sched.reorder (sched_dump, sched_verbose,
2969
                                 ready_lastpos (&ready),
2970
                                 &ready.n_ready, clock_var);
2971
      else
2972
        can_issue_more = issue_rate;
2973
 
2974
      first_cycle_insn_p = 1;
2975
      cycle_issued_insns = 0;
2976
      for (;;)
2977
        {
2978
          rtx insn;
2979
          int cost;
2980
          bool asm_p = false;
2981
 
2982
          if (sched_verbose >= 2)
2983
            {
2984
              fprintf (sched_dump, ";;\tReady list (t = %3d):  ",
2985
                       clock_var);
2986
              debug_ready_list (&ready);
2987
              if (sched_pressure_p)
2988
                print_curr_reg_pressure ();
2989
            }
2990
 
2991
          if (ready.n_ready == 0
2992
              && can_issue_more
2993
              && reload_completed)
2994
            {
2995
              /* Allow scheduling insns directly from the queue in case
2996
                 there's nothing better to do (ready list is empty) but
2997
                 there are still vacant dispatch slots in the current cycle.  */
2998
              if (sched_verbose >= 6)
2999
                fprintf (sched_dump,";;\t\tSecond chance\n");
3000
              memcpy (temp_state, curr_state, dfa_state_size);
3001
              if (early_queue_to_ready (temp_state, &ready))
3002
                ready_sort (&ready);
3003
            }
3004
 
3005
          if (ready.n_ready == 0
3006
              || !can_issue_more
3007
              || state_dead_lock_p (curr_state)
3008
              || !(*current_sched_info->schedule_more_p) ())
3009
            break;
3010
 
3011
          /* Select and remove the insn from the ready list.  */
3012
          if (sort_p)
3013
            {
3014
              int res;
3015
 
3016
              insn = NULL_RTX;
3017
              res = choose_ready (&ready, &insn);
3018
 
3019
              if (res < 0)
3020
                /* Finish cycle.  */
3021
                break;
3022
              if (res > 0)
3023
                /* Restart choose_ready ().  */
3024
                continue;
3025
 
3026
              gcc_assert (insn != NULL_RTX);
3027
            }
3028
          else
3029
            insn = ready_remove_first (&ready);
3030
 
3031
          if (sched_pressure_p && INSN_TICK (insn) > clock_var)
3032
            {
3033
              ready_add (&ready, insn, true);
3034
              advance = 1;
3035
              break;
3036
            }
3037
 
3038
          if (targetm.sched.dfa_new_cycle
3039
              && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
3040
                                              insn, last_clock_var,
3041
                                              clock_var, &sort_p))
3042
            /* SORT_P is used by the target to override sorting
3043
               of the ready list.  This is needed when the target
3044
               has modified its internal structures expecting that
3045
               the insn will be issued next.  As we need the insn
3046
               to have the highest priority (so it will be returned by
3047
               the ready_remove_first call above), we invoke
3048
               ready_add (&ready, insn, true).
3049
               But, still, there is one issue: INSN can be later
3050
               discarded by scheduler's front end through
3051
               current_sched_info->can_schedule_ready_p, hence, won't
3052
               be issued next.  */
3053
            {
3054
              ready_add (&ready, insn, true);
3055
              break;
3056
            }
3057
 
3058
          sort_p = TRUE;
3059
          memcpy (temp_state, curr_state, dfa_state_size);
3060
          if (recog_memoized (insn) < 0)
3061
            {
3062
              asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
3063
                       || asm_noperands (PATTERN (insn)) >= 0);
3064
              if (!first_cycle_insn_p && asm_p)
3065
                /* This is asm insn which is tried to be issued on the
3066
                   cycle not first.  Issue it on the next cycle.  */
3067
                cost = 1;
3068
              else
3069
                /* A USE insn, or something else we don't need to
3070
                   understand.  We can't pass these directly to
3071
                   state_transition because it will trigger a
3072
                   fatal error for unrecognizable insns.  */
3073
                cost = 0;
3074
            }
3075
          else if (sched_pressure_p)
3076
            cost = 0;
3077
          else
3078
            {
3079
              cost = state_transition (temp_state, insn);
3080
              if (cost < 0)
3081
                cost = 0;
3082
              else if (cost == 0)
3083
                cost = 1;
3084
            }
3085
 
3086
          if (cost >= 1)
3087
            {
3088
              queue_insn (insn, cost);
3089
              if (SCHED_GROUP_P (insn))
3090
                {
3091
                  advance = cost;
3092
                  break;
3093
                }
3094
 
3095
              continue;
3096
            }
3097
 
3098
          if (current_sched_info->can_schedule_ready_p
3099
              && ! (*current_sched_info->can_schedule_ready_p) (insn))
3100
            /* We normally get here only if we don't want to move
3101
               insn from the split block.  */
3102
            {
3103
              TODO_SPEC (insn) = (TODO_SPEC (insn) & ~SPECULATIVE) | HARD_DEP;
3104
              continue;
3105
            }
3106
 
3107
          /* DECISION is made.  */
3108
 
3109
          if (TODO_SPEC (insn) & SPECULATIVE)
3110
            generate_recovery_code (insn);
3111
 
3112
          if (control_flow_insn_p (last_scheduled_insn)
3113
              /* This is used to switch basic blocks by request
3114
                 from scheduler front-end (actually, sched-ebb.c only).
3115
                 This is used to process blocks with single fallthru
3116
                 edge.  If succeeding block has jump, it [jump] will try
3117
                 move at the end of current bb, thus corrupting CFG.  */
3118
              || current_sched_info->advance_target_bb (*target_bb, insn))
3119
            {
3120
              *target_bb = current_sched_info->advance_target_bb
3121
                (*target_bb, 0);
3122
 
3123
              if (sched_verbose)
3124
                {
3125
                  rtx x;
3126
 
3127
                  x = next_real_insn (last_scheduled_insn);
3128
                  gcc_assert (x);
3129
                  dump_new_block_header (1, *target_bb, x, tail);
3130
                }
3131
 
3132
              last_scheduled_insn = bb_note (*target_bb);
3133
            }
3134
 
3135
          /* Update counters, etc in the scheduler's front end.  */
3136
          (*current_sched_info->begin_schedule_ready) (insn,
3137
                                                       last_scheduled_insn);
3138
 
3139
          move_insn (insn, last_scheduled_insn, current_sched_info->next_tail);
3140
          reemit_notes (insn);
3141
          last_scheduled_insn = insn;
3142
 
3143
          if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
3144
            {
3145
              cycle_issued_insns++;
3146
              memcpy (curr_state, temp_state, dfa_state_size);
3147
            }
3148
 
3149
          if (targetm.sched.variable_issue)
3150
            can_issue_more =
3151
              targetm.sched.variable_issue (sched_dump, sched_verbose,
3152
                                            insn, can_issue_more);
3153
          /* A naked CLOBBER or USE generates no instruction, so do
3154
             not count them against the issue rate.  */
3155
          else if (GET_CODE (PATTERN (insn)) != USE
3156
                   && GET_CODE (PATTERN (insn)) != CLOBBER)
3157
            can_issue_more--;
3158
          advance = schedule_insn (insn);
3159
 
3160
          /* After issuing an asm insn we should start a new cycle.  */
3161
          if (advance == 0 && asm_p)
3162
            advance = 1;
3163
          if (advance != 0)
3164
            break;
3165
 
3166
          first_cycle_insn_p = 0;
3167
 
3168
          /* Sort the ready list based on priority.  This must be
3169
             redone here, as schedule_insn may have readied additional
3170
             insns that will not be sorted correctly.  */
3171
          if (ready.n_ready > 0)
3172
            ready_sort (&ready);
3173
 
3174
          /* Quickly go through debug insns such that md sched
3175
             reorder2 doesn't have to deal with debug insns.  */
3176
          if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
3177
              && (*current_sched_info->schedule_more_p) ())
3178
            {
3179
              if (control_flow_insn_p (last_scheduled_insn))
3180
                {
3181
                  *target_bb = current_sched_info->advance_target_bb
3182
                    (*target_bb, 0);
3183
 
3184
                  if (sched_verbose)
3185
                    {
3186
                      rtx x;
3187
 
3188
                      x = next_real_insn (last_scheduled_insn);
3189
                      gcc_assert (x);
3190
                      dump_new_block_header (1, *target_bb, x, tail);
3191
                    }
3192
 
3193
                  last_scheduled_insn = bb_note (*target_bb);
3194
                }
3195
 
3196
              while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
3197
                {
3198
                  insn = ready_remove_first (&ready);
3199
                  gcc_assert (DEBUG_INSN_P (insn));
3200
                  (*current_sched_info->begin_schedule_ready)
3201
                    (insn, last_scheduled_insn);
3202
                  move_insn (insn, last_scheduled_insn,
3203
                             current_sched_info->next_tail);
3204
                  advance = schedule_insn (insn);
3205
                  last_scheduled_insn = insn;
3206
                  gcc_assert (advance == 0);
3207
                  if (ready.n_ready > 0)
3208
                    ready_sort (&ready);
3209
                }
3210
            }
3211
 
3212
          if (targetm.sched.reorder2
3213
              && (ready.n_ready == 0
3214
                  || !SCHED_GROUP_P (ready_element (&ready, 0))))
3215
            {
3216
              can_issue_more =
3217
                targetm.sched.reorder2 (sched_dump, sched_verbose,
3218
                                        ready.n_ready
3219
                                        ? ready_lastpos (&ready) : NULL,
3220
                                        &ready.n_ready, clock_var);
3221
            }
3222
        }
3223
    }
3224
 
3225
  /* Debug info.  */
3226
  if (sched_verbose)
3227
    {
3228
      fprintf (sched_dump, ";;\tReady list (final):  ");
3229
      debug_ready_list (&ready);
3230
    }
3231
 
3232
  if (current_sched_info->queue_must_finish_empty)
3233
    /* Sanity check -- queue must be empty now.  Meaningless if region has
3234
       multiple bbs.  */
3235
    gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
3236
  else
3237
    {
3238
      /* We must maintain QUEUE_INDEX between blocks in region.  */
3239
      for (i = ready.n_ready - 1; i >= 0; i--)
3240
        {
3241
          rtx x;
3242
 
3243
          x = ready_element (&ready, i);
3244
          QUEUE_INDEX (x) = QUEUE_NOWHERE;
3245
          TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3246
        }
3247
 
3248
      if (q_size)
3249
        for (i = 0; i <= max_insn_queue_index; i++)
3250
          {
3251
            rtx link;
3252
            for (link = insn_queue[i]; link; link = XEXP (link, 1))
3253
              {
3254
                rtx x;
3255
 
3256
                x = XEXP (link, 0);
3257
                QUEUE_INDEX (x) = QUEUE_NOWHERE;
3258
                TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3259
              }
3260
            free_INSN_LIST_list (&insn_queue[i]);
3261
          }
3262
    }
3263
 
3264
  if (sched_verbose)
3265
    fprintf (sched_dump, ";;   total time = %d\n", clock_var);
3266
 
3267
  if (!current_sched_info->queue_must_finish_empty
3268
      || haifa_recovery_bb_recently_added_p)
3269
    {
3270
      /* INSN_TICK (minimum clock tick at which the insn becomes
3271
         ready) may be not correct for the insn in the subsequent
3272
         blocks of the region.  We should use a correct value of
3273
         `clock_var' or modify INSN_TICK.  It is better to keep
3274
         clock_var value equal to 0 at the start of a basic block.
3275
         Therefore we modify INSN_TICK here.  */
3276
      fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
3277
    }
3278
 
3279
  if (targetm.sched.md_finish)
3280
    {
3281
      targetm.sched.md_finish (sched_dump, sched_verbose);
3282
      /* Target might have added some instructions to the scheduled block
3283
         in its md_finish () hook.  These new insns don't have any data
3284
         initialized and to identify them we extend h_i_d so that they'll
3285
         get zero luids.  */
3286
      sched_init_luids (NULL, NULL, NULL, NULL);
3287
    }
3288
 
3289
  if (sched_verbose)
3290
    fprintf (sched_dump, ";;   new head = %d\n;;   new tail = %d\n\n",
3291
             INSN_UID (head), INSN_UID (tail));
3292
 
3293
  /* Update head/tail boundaries.  */
3294
  head = NEXT_INSN (prev_head);
3295
  tail = last_scheduled_insn;
3296
 
3297
  head = restore_other_notes (head, NULL);
3298
 
3299
  current_sched_info->head = head;
3300
  current_sched_info->tail = tail;
3301
}
3302
 
3303
/* Set_priorities: compute priority of each insn in the block.  */
3304
 
3305
int
3306
set_priorities (rtx head, rtx tail)
3307
{
3308
  rtx insn;
3309
  int n_insn;
3310
  int sched_max_insns_priority =
3311
        current_sched_info->sched_max_insns_priority;
3312
  rtx prev_head;
3313
 
3314
  if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
3315
    gcc_unreachable ();
3316
 
3317
  n_insn = 0;
3318
 
3319
  prev_head = PREV_INSN (head);
3320
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
3321
    {
3322
      if (!INSN_P (insn))
3323
        continue;
3324
 
3325
      n_insn++;
3326
      (void) priority (insn);
3327
 
3328
      gcc_assert (INSN_PRIORITY_KNOWN (insn));
3329
 
3330
      sched_max_insns_priority = MAX (sched_max_insns_priority,
3331
                                      INSN_PRIORITY (insn));
3332
    }
3333
 
3334
  current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
3335
 
3336
  return n_insn;
3337
}
3338
 
3339
/* Set dump and sched_verbose for the desired debugging output.  If no
3340
   dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
3341
   For -fsched-verbose=N, N>=10, print everything to stderr.  */
3342
void
3343
setup_sched_dump (void)
3344
{
3345
  sched_verbose = sched_verbose_param;
3346
  if (sched_verbose_param == 0 && dump_file)
3347
    sched_verbose = 1;
3348
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
3349
                ? stderr : dump_file);
3350
}
3351
 
3352
/* Initialize some global state for the scheduler.  This function works
3353
   with the common data shared between all the schedulers.  It is called
3354
   from the scheduler specific initialization routine.  */
3355
 
3356
void
3357
sched_init (void)
3358
{
3359
  /* Disable speculative loads in their presence if cc0 defined.  */
3360
#ifdef HAVE_cc0
3361
  flag_schedule_speculative_load = 0;
3362
#endif
3363
 
3364
  sched_pressure_p = (flag_sched_pressure && ! reload_completed
3365
                      && common_sched_info->sched_pass_id == SCHED_RGN_PASS);
3366
  if (sched_pressure_p)
3367
    ira_setup_eliminable_regset ();
3368
 
3369
  /* Initialize SPEC_INFO.  */
3370
  if (targetm.sched.set_sched_flags)
3371
    {
3372
      spec_info = &spec_info_var;
3373
      targetm.sched.set_sched_flags (spec_info);
3374
 
3375
      if (spec_info->mask != 0)
3376
        {
3377
          spec_info->data_weakness_cutoff =
3378
            (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
3379
          spec_info->control_weakness_cutoff =
3380
            (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
3381
             * REG_BR_PROB_BASE) / 100;
3382
        }
3383
      else
3384
        /* So we won't read anything accidentally.  */
3385
        spec_info = NULL;
3386
 
3387
    }
3388
  else
3389
    /* So we won't read anything accidentally.  */
3390
    spec_info = 0;
3391
 
3392
  /* Initialize issue_rate.  */
3393
  if (targetm.sched.issue_rate)
3394
    issue_rate = targetm.sched.issue_rate ();
3395
  else
3396
    issue_rate = 1;
3397
 
3398
  if (cached_issue_rate != issue_rate)
3399
    {
3400
      cached_issue_rate = issue_rate;
3401
      /* To invalidate max_lookahead_tries:  */
3402
      cached_first_cycle_multipass_dfa_lookahead = 0;
3403
    }
3404
 
3405
  if (targetm.sched.first_cycle_multipass_dfa_lookahead)
3406
    dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
3407
  else
3408
    dfa_lookahead = 0;
3409
 
3410
  if (targetm.sched.init_dfa_pre_cycle_insn)
3411
    targetm.sched.init_dfa_pre_cycle_insn ();
3412
 
3413
  if (targetm.sched.init_dfa_post_cycle_insn)
3414
    targetm.sched.init_dfa_post_cycle_insn ();
3415
 
3416
  dfa_start ();
3417
  dfa_state_size = state_size ();
3418
 
3419
  init_alias_analysis ();
3420
 
3421
  df_set_flags (DF_LR_RUN_DCE);
3422
  df_note_add_problem ();
3423
 
3424
  /* More problems needed for interloop dep calculation in SMS.  */
3425
  if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
3426
    {
3427
      df_rd_add_problem ();
3428
      df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
3429
    }
3430
 
3431
  df_analyze ();
3432
 
3433
  /* Do not run DCE after reload, as this can kill nops inserted
3434
     by bundling.  */
3435
  if (reload_completed)
3436
    df_clear_flags (DF_LR_RUN_DCE);
3437
 
3438
  regstat_compute_calls_crossed ();
3439
 
3440
  if (targetm.sched.md_init_global)
3441
    targetm.sched.md_init_global (sched_dump, sched_verbose,
3442
                                  get_max_uid () + 1);
3443
 
3444
  if (sched_pressure_p)
3445
    {
3446
      int i, max_regno = max_reg_num ();
3447
 
3448
      ira_set_pseudo_classes (sched_verbose ? sched_dump : NULL);
3449
      sched_regno_cover_class
3450
        = (enum reg_class *) xmalloc (max_regno * sizeof (enum reg_class));
3451
      for (i = 0; i < max_regno; i++)
3452
        sched_regno_cover_class[i]
3453
          = (i < FIRST_PSEUDO_REGISTER
3454
             ? ira_class_translate[REGNO_REG_CLASS (i)]
3455
             : reg_cover_class (i));
3456
      curr_reg_live = BITMAP_ALLOC (NULL);
3457
      saved_reg_live = BITMAP_ALLOC (NULL);
3458
      region_ref_regs = BITMAP_ALLOC (NULL);
3459
    }
3460
 
3461
  curr_state = xmalloc (dfa_state_size);
3462
}
3463
 
3464
static void haifa_init_only_bb (basic_block, basic_block);
3465
 
3466
/* Initialize data structures specific to the Haifa scheduler.  */
3467
void
3468
haifa_sched_init (void)
3469
{
3470
  setup_sched_dump ();
3471
  sched_init ();
3472
 
3473
  if (spec_info != NULL)
3474
    {
3475
      sched_deps_info->use_deps_list = 1;
3476
      sched_deps_info->generate_spec_deps = 1;
3477
    }
3478
 
3479
  /* Initialize luids, dependency caches, target and h_i_d for the
3480
     whole function.  */
3481
  {
3482
    bb_vec_t bbs = VEC_alloc (basic_block, heap, n_basic_blocks);
3483
    basic_block bb;
3484
 
3485
    sched_init_bbs ();
3486
 
3487
    FOR_EACH_BB (bb)
3488
      VEC_quick_push (basic_block, bbs, bb);
3489
    sched_init_luids (bbs, NULL, NULL, NULL);
3490
    sched_deps_init (true);
3491
    sched_extend_target ();
3492
    haifa_init_h_i_d (bbs, NULL, NULL, NULL);
3493
 
3494
    VEC_free (basic_block, heap, bbs);
3495
  }
3496
 
3497
  sched_init_only_bb = haifa_init_only_bb;
3498
  sched_split_block = sched_split_block_1;
3499
  sched_create_empty_bb = sched_create_empty_bb_1;
3500
  haifa_recovery_bb_ever_added_p = false;
3501
 
3502
#ifdef ENABLE_CHECKING
3503
  /* This is used preferably for finding bugs in check_cfg () itself.
3504
     We must call sched_bbs_init () before check_cfg () because check_cfg ()
3505
     assumes that the last insn in the last bb has a non-null successor.  */
3506
  check_cfg (0, 0);
3507
#endif
3508
 
3509
  nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
3510
  before_recovery = 0;
3511
  after_recovery = 0;
3512
}
3513
 
3514
/* Finish work with the data specific to the Haifa scheduler.  */
3515
void
3516
haifa_sched_finish (void)
3517
{
3518
  sched_create_empty_bb = NULL;
3519
  sched_split_block = NULL;
3520
  sched_init_only_bb = NULL;
3521
 
3522
  if (spec_info && spec_info->dump)
3523
    {
3524
      char c = reload_completed ? 'a' : 'b';
3525
 
3526
      fprintf (spec_info->dump,
3527
               ";; %s:\n", current_function_name ());
3528
 
3529
      fprintf (spec_info->dump,
3530
               ";; Procedure %cr-begin-data-spec motions == %d\n",
3531
               c, nr_begin_data);
3532
      fprintf (spec_info->dump,
3533
               ";; Procedure %cr-be-in-data-spec motions == %d\n",
3534
               c, nr_be_in_data);
3535
      fprintf (spec_info->dump,
3536
               ";; Procedure %cr-begin-control-spec motions == %d\n",
3537
               c, nr_begin_control);
3538
      fprintf (spec_info->dump,
3539
               ";; Procedure %cr-be-in-control-spec motions == %d\n",
3540
               c, nr_be_in_control);
3541
    }
3542
 
3543
  /* Finalize h_i_d, dependency caches, and luids for the whole
3544
     function.  Target will be finalized in md_global_finish ().  */
3545
  sched_deps_finish ();
3546
  sched_finish_luids ();
3547
  current_sched_info = NULL;
3548
  sched_finish ();
3549
}
3550
 
3551
/* Free global data used during insn scheduling.  This function works with
3552
   the common data shared between the schedulers.  */
3553
 
3554
void
3555
sched_finish (void)
3556
{
3557
  haifa_finish_h_i_d ();
3558
  if (sched_pressure_p)
3559
    {
3560
      free (sched_regno_cover_class);
3561
      BITMAP_FREE (region_ref_regs);
3562
      BITMAP_FREE (saved_reg_live);
3563
      BITMAP_FREE (curr_reg_live);
3564
    }
3565
  free (curr_state);
3566
 
3567
  if (targetm.sched.md_finish_global)
3568
    targetm.sched.md_finish_global (sched_dump, sched_verbose);
3569
 
3570
  end_alias_analysis ();
3571
 
3572
  regstat_free_calls_crossed ();
3573
 
3574
  dfa_finish ();
3575
 
3576
#ifdef ENABLE_CHECKING
3577
  /* After reload ia64 backend clobbers CFG, so can't check anything.  */
3578
  if (!reload_completed)
3579
    check_cfg (0, 0);
3580
#endif
3581
}
3582
 
3583
/* Fix INSN_TICKs of the instructions in the current block as well as
3584
   INSN_TICKs of their dependents.
3585
   HEAD and TAIL are the begin and the end of the current scheduled block.  */
3586
static void
3587
fix_inter_tick (rtx head, rtx tail)
3588
{
3589
  /* Set of instructions with corrected INSN_TICK.  */
3590
  bitmap_head processed;
3591
  /* ??? It is doubtful if we should assume that cycle advance happens on
3592
     basic block boundaries.  Basically insns that are unconditionally ready
3593
     on the start of the block are more preferable then those which have
3594
     a one cycle dependency over insn from the previous block.  */
3595
  int next_clock = clock_var + 1;
3596
 
3597
  bitmap_initialize (&processed, 0);
3598
 
3599
  /* Iterates over scheduled instructions and fix their INSN_TICKs and
3600
     INSN_TICKs of dependent instructions, so that INSN_TICKs are consistent
3601
     across different blocks.  */
3602
  for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
3603
    {
3604
      if (INSN_P (head))
3605
        {
3606
          int tick;
3607
          sd_iterator_def sd_it;
3608
          dep_t dep;
3609
 
3610
          tick = INSN_TICK (head);
3611
          gcc_assert (tick >= MIN_TICK);
3612
 
3613
          /* Fix INSN_TICK of instruction from just scheduled block.  */
3614
          if (!bitmap_bit_p (&processed, INSN_LUID (head)))
3615
            {
3616
              bitmap_set_bit (&processed, INSN_LUID (head));
3617
              tick -= next_clock;
3618
 
3619
              if (tick < MIN_TICK)
3620
                tick = MIN_TICK;
3621
 
3622
              INSN_TICK (head) = tick;
3623
            }
3624
 
3625
          FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
3626
            {
3627
              rtx next;
3628
 
3629
              next = DEP_CON (dep);
3630
              tick = INSN_TICK (next);
3631
 
3632
              if (tick != INVALID_TICK
3633
                  /* If NEXT has its INSN_TICK calculated, fix it.
3634
                     If not - it will be properly calculated from
3635
                     scratch later in fix_tick_ready.  */
3636
                  && !bitmap_bit_p (&processed, INSN_LUID (next)))
3637
                {
3638
                  bitmap_set_bit (&processed, INSN_LUID (next));
3639
                  tick -= next_clock;
3640
 
3641
                  if (tick < MIN_TICK)
3642
                    tick = MIN_TICK;
3643
 
3644
                  if (tick > INTER_TICK (next))
3645
                    INTER_TICK (next) = tick;
3646
                  else
3647
                    tick = INTER_TICK (next);
3648
 
3649
                  INSN_TICK (next) = tick;
3650
                }
3651
            }
3652
        }
3653
    }
3654
  bitmap_clear (&processed);
3655
}
3656
 
3657
static int haifa_speculate_insn (rtx, ds_t, rtx *);
3658
 
3659
/* Check if NEXT is ready to be added to the ready or queue list.
3660
   If "yes", add it to the proper list.
3661
   Returns:
3662
      -1 - is not ready yet,
3663
 
3664
 
3665
int
3666
try_ready (rtx next)
3667
{
3668
  ds_t old_ts, *ts;
3669
 
3670
  ts = &TODO_SPEC (next);
3671
  old_ts = *ts;
3672
 
3673
  gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP))
3674
              && ((old_ts & HARD_DEP)
3675
                  || (old_ts & SPECULATIVE)));
3676
 
3677
  if (sd_lists_empty_p (next, SD_LIST_BACK))
3678
    /* NEXT has all its dependencies resolved.  */
3679
    {
3680
      /* Remove HARD_DEP bit from NEXT's status.  */
3681
      *ts &= ~HARD_DEP;
3682
 
3683
      if (current_sched_info->flags & DO_SPECULATION)
3684
        /* Remove all speculative bits from NEXT's status.  */
3685
        *ts &= ~SPECULATIVE;
3686
    }
3687
  else
3688
    {
3689
      /* One of the NEXT's dependencies has been resolved.
3690
         Recalculate NEXT's status.  */
3691
 
3692
      *ts &= ~SPECULATIVE & ~HARD_DEP;
3693
 
3694
      if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
3695
        /* Now we've got NEXT with speculative deps only.
3696
           1. Look at the deps to see what we have to do.
3697
           2. Check if we can do 'todo'.  */
3698
        {
3699
          sd_iterator_def sd_it;
3700
          dep_t dep;
3701
          bool first_p = true;
3702
 
3703
          FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
3704
            {
3705
              ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
3706
 
3707
              if (DEBUG_INSN_P (DEP_PRO (dep))
3708
                  && !DEBUG_INSN_P (next))
3709
                continue;
3710
 
3711
              if (first_p)
3712
                {
3713
                  first_p = false;
3714
 
3715
                  *ts = ds;
3716
                }
3717
              else
3718
                *ts = ds_merge (*ts, ds);
3719
            }
3720
 
3721
          if (ds_weak (*ts) < spec_info->data_weakness_cutoff)
3722
            /* Too few points.  */
3723
            *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3724
        }
3725
      else
3726
        *ts |= HARD_DEP;
3727
    }
3728
 
3729
  if (*ts & HARD_DEP)
3730
    gcc_assert (*ts == old_ts
3731
                && QUEUE_INDEX (next) == QUEUE_NOWHERE);
3732
  else if (current_sched_info->new_ready)
3733
    *ts = current_sched_info->new_ready (next, *ts);
3734
 
3735
  /* * if !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then insn might
3736
     have its original pattern or changed (speculative) one.  This is due
3737
     to changing ebb in region scheduling.
3738
     * But if (old_ts & SPECULATIVE), then we are pretty sure that insn
3739
     has speculative pattern.
3740
 
3741
     We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
3742
     control-speculative NEXT could have been discarded by sched-rgn.c
3743
     (the same case as when discarded by can_schedule_ready_p ()).  */
3744
 
3745
  if ((*ts & SPECULATIVE)
3746
      /* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
3747
         need to change anything.  */
3748
      && *ts != old_ts)
3749
    {
3750
      int res;
3751
      rtx new_pat;
3752
 
3753
      gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
3754
 
3755
      res = haifa_speculate_insn (next, *ts, &new_pat);
3756
 
3757
      switch (res)
3758
        {
3759
        case -1:
3760
          /* It would be nice to change DEP_STATUS of all dependences,
3761
             which have ((DEP_STATUS & SPECULATIVE) == *ts) to HARD_DEP,
3762
             so we won't reanalyze anything.  */
3763
          *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3764
          break;
3765
 
3766
        case 0:
3767
          /* We follow the rule, that every speculative insn
3768
             has non-null ORIG_PAT.  */
3769
          if (!ORIG_PAT (next))
3770
            ORIG_PAT (next) = PATTERN (next);
3771
          break;
3772
 
3773
        case 1:
3774
          if (!ORIG_PAT (next))
3775
            /* If we gonna to overwrite the original pattern of insn,
3776
               save it.  */
3777
            ORIG_PAT (next) = PATTERN (next);
3778
 
3779
          haifa_change_pattern (next, new_pat);
3780
          break;
3781
 
3782
        default:
3783
          gcc_unreachable ();
3784
        }
3785
    }
3786
 
3787
  /* We need to restore pattern only if (*ts == 0), because otherwise it is
3788
     either correct (*ts & SPECULATIVE),
3789
     or we simply don't care (*ts & HARD_DEP).  */
3790
 
3791
  gcc_assert (!ORIG_PAT (next)
3792
              || !IS_SPECULATION_BRANCHY_CHECK_P (next));
3793
 
3794
  if (*ts & HARD_DEP)
3795
    {
3796
      /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
3797
         control-speculative NEXT could have been discarded by sched-rgn.c
3798
         (the same case as when discarded by can_schedule_ready_p ()).  */
3799
      /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
3800
 
3801
      change_queue_index (next, QUEUE_NOWHERE);
3802
      return -1;
3803
    }
3804
  else if (!(*ts & BEGIN_SPEC) && ORIG_PAT (next) && !IS_SPECULATION_CHECK_P (next))
3805
    /* We should change pattern of every previously speculative
3806
       instruction - and we determine if NEXT was speculative by using
3807
       ORIG_PAT field.  Except one case - speculation checks have ORIG_PAT
3808
       pat too, so skip them.  */
3809
    {
3810
      haifa_change_pattern (next, ORIG_PAT (next));
3811
      ORIG_PAT (next) = 0;
3812
    }
3813
 
3814
  if (sched_verbose >= 2)
3815
    {
3816
      int s = TODO_SPEC (next);
3817
 
3818
      fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
3819
               (*current_sched_info->print_insn) (next, 0));
3820
 
3821
      if (spec_info && spec_info->dump)
3822
        {
3823
          if (s & BEGIN_DATA)
3824
            fprintf (spec_info->dump, "; data-spec;");
3825
          if (s & BEGIN_CONTROL)
3826
            fprintf (spec_info->dump, "; control-spec;");
3827
          if (s & BE_IN_CONTROL)
3828
            fprintf (spec_info->dump, "; in-control-spec;");
3829
        }
3830
 
3831
      fprintf (sched_dump, "\n");
3832
    }
3833
 
3834
  adjust_priority (next);
3835
 
3836
  return fix_tick_ready (next);
3837
}
3838
 
3839
/* Calculate INSN_TICK of NEXT and add it to either ready or queue list.  */
3840
static int
3841
fix_tick_ready (rtx next)
3842
{
3843
  int tick, delay;
3844
 
3845
  if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
3846
    {
3847
      int full_p;
3848
      sd_iterator_def sd_it;
3849
      dep_t dep;
3850
 
3851
      tick = INSN_TICK (next);
3852
      /* if tick is not equal to INVALID_TICK, then update
3853
         INSN_TICK of NEXT with the most recent resolved dependence
3854
         cost.  Otherwise, recalculate from scratch.  */
3855
      full_p = (tick == INVALID_TICK);
3856
 
3857
      FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
3858
        {
3859
          rtx pro = DEP_PRO (dep);
3860
          int tick1;
3861
 
3862
          gcc_assert (INSN_TICK (pro) >= MIN_TICK);
3863
 
3864
          tick1 = INSN_TICK (pro) + dep_cost (dep);
3865
          if (tick1 > tick)
3866
            tick = tick1;
3867
 
3868
          if (!full_p)
3869
            break;
3870
        }
3871
    }
3872
  else
3873
    tick = -1;
3874
 
3875
  INSN_TICK (next) = tick;
3876
 
3877
  delay = tick - clock_var;
3878
  if (delay <= 0 || sched_pressure_p)
3879
    delay = QUEUE_READY;
3880
 
3881
  change_queue_index (next, delay);
3882
 
3883
  return delay;
3884
}
3885
 
3886
/* Move NEXT to the proper queue list with (DELAY >= 1),
3887
   or add it to the ready list (DELAY == QUEUE_READY),
3888
   or remove it from ready and queue lists at all (DELAY == QUEUE_NOWHERE).  */
3889
static void
3890
change_queue_index (rtx next, int delay)
3891
{
3892
  int i = QUEUE_INDEX (next);
3893
 
3894
  gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
3895
              && delay != 0);
3896
  gcc_assert (i != QUEUE_SCHEDULED);
3897
 
3898
  if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
3899
      || (delay < 0 && delay == i))
3900
    /* We have nothing to do.  */
3901
    return;
3902
 
3903
  /* Remove NEXT from wherever it is now.  */
3904
  if (i == QUEUE_READY)
3905
    ready_remove_insn (next);
3906
  else if (i >= 0)
3907
    queue_remove (next);
3908
 
3909
  /* Add it to the proper place.  */
3910
  if (delay == QUEUE_READY)
3911
    ready_add (readyp, next, false);
3912
  else if (delay >= 1)
3913
    queue_insn (next, delay);
3914
 
3915
  if (sched_verbose >= 2)
3916
    {
3917
      fprintf (sched_dump, ";;\t\ttick updated: insn %s",
3918
               (*current_sched_info->print_insn) (next, 0));
3919
 
3920
      if (delay == QUEUE_READY)
3921
        fprintf (sched_dump, " into ready\n");
3922
      else if (delay >= 1)
3923
        fprintf (sched_dump, " into queue with cost=%d\n", delay);
3924
      else
3925
        fprintf (sched_dump, " removed from ready or queue lists\n");
3926
    }
3927
}
3928
 
3929
static int sched_ready_n_insns = -1;
3930
 
3931
/* Initialize per region data structures.  */
3932
void
3933
sched_extend_ready_list (int new_sched_ready_n_insns)
3934
{
3935
  int i;
3936
 
3937
  if (sched_ready_n_insns == -1)
3938
    /* At the first call we need to initialize one more choice_stack
3939
       entry.  */
3940
    {
3941
      i = 0;
3942
      sched_ready_n_insns = 0;
3943
    }
3944
  else
3945
    i = sched_ready_n_insns + 1;
3946
 
3947
  ready.veclen = new_sched_ready_n_insns + issue_rate;
3948
  ready.vec = XRESIZEVEC (rtx, ready.vec, ready.veclen);
3949
 
3950
  gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
3951
 
3952
  ready_try = (char *) xrecalloc (ready_try, new_sched_ready_n_insns,
3953
                                  sched_ready_n_insns, sizeof (*ready_try));
3954
 
3955
  /* We allocate +1 element to save initial state in the choice_stack[0]
3956
     entry.  */
3957
  choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
3958
                             new_sched_ready_n_insns + 1);
3959
 
3960
  for (; i <= new_sched_ready_n_insns; i++)
3961
    choice_stack[i].state = xmalloc (dfa_state_size);
3962
 
3963
  sched_ready_n_insns = new_sched_ready_n_insns;
3964
}
3965
 
3966
/* Free per region data structures.  */
3967
void
3968
sched_finish_ready_list (void)
3969
{
3970
  int i;
3971
 
3972
  free (ready.vec);
3973
  ready.vec = NULL;
3974
  ready.veclen = 0;
3975
 
3976
  free (ready_try);
3977
  ready_try = NULL;
3978
 
3979
  for (i = 0; i <= sched_ready_n_insns; i++)
3980
    free (choice_stack [i].state);
3981
  free (choice_stack);
3982
  choice_stack = NULL;
3983
 
3984
  sched_ready_n_insns = -1;
3985
}
3986
 
3987
static int
3988
haifa_luid_for_non_insn (rtx x)
3989
{
3990
  gcc_assert (NOTE_P (x) || LABEL_P (x));
3991
 
3992
  return 0;
3993
}
3994
 
3995
/* Generates recovery code for INSN.  */
3996
static void
3997
generate_recovery_code (rtx insn)
3998
{
3999
  if (TODO_SPEC (insn) & BEGIN_SPEC)
4000
    begin_speculative_block (insn);
4001
 
4002
  /* Here we have insn with no dependencies to
4003
     instructions other then CHECK_SPEC ones.  */
4004
 
4005
  if (TODO_SPEC (insn) & BE_IN_SPEC)
4006
    add_to_speculative_block (insn);
4007
}
4008
 
4009
/* Helper function.
4010
   Tries to add speculative dependencies of type FS between instructions
4011
   in deps_list L and TWIN.  */
4012
static void
4013
process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
4014
{
4015
  sd_iterator_def sd_it;
4016
  dep_t dep;
4017
 
4018
  FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
4019
    {
4020
      ds_t ds;
4021
      rtx consumer;
4022
 
4023
      consumer = DEP_CON (dep);
4024
 
4025
      ds = DEP_STATUS (dep);
4026
 
4027
      if (/* If we want to create speculative dep.  */
4028
          fs
4029
          /* And we can do that because this is a true dep.  */
4030
          && (ds & DEP_TYPES) == DEP_TRUE)
4031
        {
4032
          gcc_assert (!(ds & BE_IN_SPEC));
4033
 
4034
          if (/* If this dep can be overcome with 'begin speculation'.  */
4035
              ds & BEGIN_SPEC)
4036
            /* Then we have a choice: keep the dep 'begin speculative'
4037
               or transform it into 'be in speculative'.  */
4038
            {
4039
              if (/* In try_ready we assert that if insn once became ready
4040
                     it can be removed from the ready (or queue) list only
4041
                     due to backend decision.  Hence we can't let the
4042
                     probability of the speculative dep to decrease.  */
4043
                  ds_weak (ds) <= ds_weak (fs))
4044
                {
4045
                  ds_t new_ds;
4046
 
4047
                  new_ds = (ds & ~BEGIN_SPEC) | fs;
4048
 
4049
                  if (/* consumer can 'be in speculative'.  */
4050
                      sched_insn_is_legitimate_for_speculation_p (consumer,
4051
                                                                  new_ds))
4052
                    /* Transform it to be in speculative.  */
4053
                    ds = new_ds;
4054
                }
4055
            }
4056
          else
4057
            /* Mark the dep as 'be in speculative'.  */
4058
            ds |= fs;
4059
        }
4060
 
4061
      {
4062
        dep_def _new_dep, *new_dep = &_new_dep;
4063
 
4064
        init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
4065
        sd_add_dep (new_dep, false);
4066
      }
4067
    }
4068
}
4069
 
4070
/* Generates recovery code for BEGIN speculative INSN.  */
4071
static void
4072
begin_speculative_block (rtx insn)
4073
{
4074
  if (TODO_SPEC (insn) & BEGIN_DATA)
4075
    nr_begin_data++;
4076
  if (TODO_SPEC (insn) & BEGIN_CONTROL)
4077
    nr_begin_control++;
4078
 
4079
  create_check_block_twin (insn, false);
4080
 
4081
  TODO_SPEC (insn) &= ~BEGIN_SPEC;
4082
}
4083
 
4084
static void haifa_init_insn (rtx);
4085
 
4086
/* Generates recovery code for BE_IN speculative INSN.  */
4087
static void
4088
add_to_speculative_block (rtx insn)
4089
{
4090
  ds_t ts;
4091
  sd_iterator_def sd_it;
4092
  dep_t dep;
4093
  rtx twins = NULL;
4094
  rtx_vec_t priorities_roots;
4095
 
4096
  ts = TODO_SPEC (insn);
4097
  gcc_assert (!(ts & ~BE_IN_SPEC));
4098
 
4099
  if (ts & BE_IN_DATA)
4100
    nr_be_in_data++;
4101
  if (ts & BE_IN_CONTROL)
4102
    nr_be_in_control++;
4103
 
4104
  TODO_SPEC (insn) &= ~BE_IN_SPEC;
4105
  gcc_assert (!TODO_SPEC (insn));
4106
 
4107
  DONE_SPEC (insn) |= ts;
4108
 
4109
  /* First we convert all simple checks to branchy.  */
4110
  for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4111
       sd_iterator_cond (&sd_it, &dep);)
4112
    {
4113
      rtx check = DEP_PRO (dep);
4114
 
4115
      if (IS_SPECULATION_SIMPLE_CHECK_P (check))
4116
        {
4117
          create_check_block_twin (check, true);
4118
 
4119
          /* Restart search.  */
4120
          sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4121
        }
4122
      else
4123
        /* Continue search.  */
4124
        sd_iterator_next (&sd_it);
4125
    }
4126
 
4127
  priorities_roots = NULL;
4128
  clear_priorities (insn, &priorities_roots);
4129
 
4130
  while (1)
4131
    {
4132
      rtx check, twin;
4133
      basic_block rec;
4134
 
4135
      /* Get the first backward dependency of INSN.  */
4136
      sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4137
      if (!sd_iterator_cond (&sd_it, &dep))
4138
        /* INSN has no backward dependencies left.  */
4139
        break;
4140
 
4141
      gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
4142
                  && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
4143
                  && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4144
 
4145
      check = DEP_PRO (dep);
4146
 
4147
      gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
4148
                  && QUEUE_INDEX (check) == QUEUE_NOWHERE);
4149
 
4150
      rec = BLOCK_FOR_INSN (check);
4151
 
4152
      twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
4153
      haifa_init_insn (twin);
4154
 
4155
      sd_copy_back_deps (twin, insn, true);
4156
 
4157
      if (sched_verbose && spec_info->dump)
4158
        /* INSN_BB (insn) isn't determined for twin insns yet.
4159
           So we can't use current_sched_info->print_insn.  */
4160
        fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4161
                 INSN_UID (twin), rec->index);
4162
 
4163
      twins = alloc_INSN_LIST (twin, twins);
4164
 
4165
      /* Add dependences between TWIN and all appropriate
4166
         instructions from REC.  */
4167
      FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
4168
        {
4169
          rtx pro = DEP_PRO (dep);
4170
 
4171
          gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
4172
 
4173
          /* INSN might have dependencies from the instructions from
4174
             several recovery blocks.  At this iteration we process those
4175
             producers that reside in REC.  */
4176
          if (BLOCK_FOR_INSN (pro) == rec)
4177
            {
4178
              dep_def _new_dep, *new_dep = &_new_dep;
4179
 
4180
              init_dep (new_dep, pro, twin, REG_DEP_TRUE);
4181
              sd_add_dep (new_dep, false);
4182
            }
4183
        }
4184
 
4185
      process_insn_forw_deps_be_in_spec (insn, twin, ts);
4186
 
4187
      /* Remove all dependencies between INSN and insns in REC.  */
4188
      for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4189
           sd_iterator_cond (&sd_it, &dep);)
4190
        {
4191
          rtx pro = DEP_PRO (dep);
4192
 
4193
          if (BLOCK_FOR_INSN (pro) == rec)
4194
            sd_delete_dep (sd_it);
4195
          else
4196
            sd_iterator_next (&sd_it);
4197
        }
4198
    }
4199
 
4200
  /* We couldn't have added the dependencies between INSN and TWINS earlier
4201
     because that would make TWINS appear in the INSN_BACK_DEPS (INSN).  */
4202
  while (twins)
4203
    {
4204
      rtx twin;
4205
 
4206
      twin = XEXP (twins, 0);
4207
 
4208
      {
4209
        dep_def _new_dep, *new_dep = &_new_dep;
4210
 
4211
        init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4212
        sd_add_dep (new_dep, false);
4213
      }
4214
 
4215
      twin = XEXP (twins, 1);
4216
      free_INSN_LIST_node (twins);
4217
      twins = twin;
4218
    }
4219
 
4220
  calc_priorities (priorities_roots);
4221
  VEC_free (rtx, heap, priorities_roots);
4222
}
4223
 
4224
/* Extends and fills with zeros (only the new part) array pointed to by P.  */
4225
void *
4226
xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
4227
{
4228
  gcc_assert (new_nmemb >= old_nmemb);
4229
  p = XRESIZEVAR (void, p, new_nmemb * size);
4230
  memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
4231
  return p;
4232
}
4233
 
4234
/* Helper function.
4235
   Find fallthru edge from PRED.  */
4236
edge
4237
find_fallthru_edge (basic_block pred)
4238
{
4239
  edge e;
4240
  edge_iterator ei;
4241
  basic_block succ;
4242
 
4243
  succ = pred->next_bb;
4244
  gcc_assert (succ->prev_bb == pred);
4245
 
4246
  if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
4247
    {
4248
      FOR_EACH_EDGE (e, ei, pred->succs)
4249
        if (e->flags & EDGE_FALLTHRU)
4250
          {
4251
            gcc_assert (e->dest == succ);
4252
            return e;
4253
          }
4254
    }
4255
  else
4256
    {
4257
      FOR_EACH_EDGE (e, ei, succ->preds)
4258
        if (e->flags & EDGE_FALLTHRU)
4259
          {
4260
            gcc_assert (e->src == pred);
4261
            return e;
4262
          }
4263
    }
4264
 
4265
  return NULL;
4266
}
4267
 
4268
/* Extend per basic block data structures.  */
4269
static void
4270
sched_extend_bb (void)
4271
{
4272
  rtx insn;
4273
 
4274
  /* The following is done to keep current_sched_info->next_tail non null.  */
4275
  insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
4276
  if (NEXT_INSN (insn) == 0
4277
      || (!NOTE_P (insn)
4278
          && !LABEL_P (insn)
4279
          /* Don't emit a NOTE if it would end up before a BARRIER.  */
4280
          && !BARRIER_P (NEXT_INSN (insn))))
4281
    {
4282
      rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
4283
      /* Make insn appear outside BB.  */
4284
      set_block_for_insn (note, NULL);
4285
      BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
4286
    }
4287
}
4288
 
4289
/* Init per basic block data structures.  */
4290
void
4291
sched_init_bbs (void)
4292
{
4293
  sched_extend_bb ();
4294
}
4295
 
4296
/* Initialize BEFORE_RECOVERY variable.  */
4297
static void
4298
init_before_recovery (basic_block *before_recovery_ptr)
4299
{
4300
  basic_block last;
4301
  edge e;
4302
 
4303
  last = EXIT_BLOCK_PTR->prev_bb;
4304
  e = find_fallthru_edge (last);
4305
 
4306
  if (e)
4307
    {
4308
      /* We create two basic blocks:
4309
         1. Single instruction block is inserted right after E->SRC
4310
         and has jump to
4311
         2. Empty block right before EXIT_BLOCK.
4312
         Between these two blocks recovery blocks will be emitted.  */
4313
 
4314
      basic_block single, empty;
4315
      rtx x, label;
4316
 
4317
      /* If the fallthrough edge to exit we've found is from the block we've
4318
         created before, don't do anything more.  */
4319
      if (last == after_recovery)
4320
        return;
4321
 
4322
      adding_bb_to_current_region_p = false;
4323
 
4324
      single = sched_create_empty_bb (last);
4325
      empty = sched_create_empty_bb (single);
4326
 
4327
      /* Add new blocks to the root loop.  */
4328
      if (current_loops != NULL)
4329
        {
4330
          add_bb_to_loop (single, VEC_index (loop_p, current_loops->larray, 0));
4331
          add_bb_to_loop (empty, VEC_index (loop_p, current_loops->larray, 0));
4332
        }
4333
 
4334
      single->count = last->count;
4335
      empty->count = last->count;
4336
      single->frequency = last->frequency;
4337
      empty->frequency = last->frequency;
4338
      BB_COPY_PARTITION (single, last);
4339
      BB_COPY_PARTITION (empty, last);
4340
 
4341
      redirect_edge_succ (e, single);
4342
      make_single_succ_edge (single, empty, 0);
4343
      make_single_succ_edge (empty, EXIT_BLOCK_PTR,
4344
                             EDGE_FALLTHRU | EDGE_CAN_FALLTHRU);
4345
 
4346
      label = block_label (empty);
4347
      x = emit_jump_insn_after (gen_jump (label), BB_END (single));
4348
      JUMP_LABEL (x) = label;
4349
      LABEL_NUSES (label)++;
4350
      haifa_init_insn (x);
4351
 
4352
      emit_barrier_after (x);
4353
 
4354
      sched_init_only_bb (empty, NULL);
4355
      sched_init_only_bb (single, NULL);
4356
      sched_extend_bb ();
4357
 
4358
      adding_bb_to_current_region_p = true;
4359
      before_recovery = single;
4360
      after_recovery = empty;
4361
 
4362
      if (before_recovery_ptr)
4363
        *before_recovery_ptr = before_recovery;
4364
 
4365
      if (sched_verbose >= 2 && spec_info->dump)
4366
        fprintf (spec_info->dump,
4367
                 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
4368
                 last->index, single->index, empty->index);
4369
    }
4370
  else
4371
    before_recovery = last;
4372
}
4373
 
4374
/* Returns new recovery block.  */
4375
basic_block
4376
sched_create_recovery_block (basic_block *before_recovery_ptr)
4377
{
4378
  rtx label;
4379
  rtx barrier;
4380
  basic_block rec;
4381
 
4382
  haifa_recovery_bb_recently_added_p = true;
4383
  haifa_recovery_bb_ever_added_p = true;
4384
 
4385
  init_before_recovery (before_recovery_ptr);
4386
 
4387
  barrier = get_last_bb_insn (before_recovery);
4388
  gcc_assert (BARRIER_P (barrier));
4389
 
4390
  label = emit_label_after (gen_label_rtx (), barrier);
4391
 
4392
  rec = create_basic_block (label, label, before_recovery);
4393
 
4394
  /* A recovery block always ends with an unconditional jump.  */
4395
  emit_barrier_after (BB_END (rec));
4396
 
4397
  if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
4398
    BB_SET_PARTITION (rec, BB_COLD_PARTITION);
4399
 
4400
  if (sched_verbose && spec_info->dump)
4401
    fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
4402
             rec->index);
4403
 
4404
  return rec;
4405
}
4406
 
4407
/* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
4408
   and emit necessary jumps.  */
4409
void
4410
sched_create_recovery_edges (basic_block first_bb, basic_block rec,
4411
                             basic_block second_bb)
4412
{
4413
  rtx label;
4414
  rtx jump;
4415
  int edge_flags;
4416
 
4417
  /* This is fixing of incoming edge.  */
4418
  /* ??? Which other flags should be specified?  */
4419
  if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
4420
    /* Partition type is the same, if it is "unpartitioned".  */
4421
    edge_flags = EDGE_CROSSING;
4422
  else
4423
    edge_flags = 0;
4424
 
4425
  make_edge (first_bb, rec, edge_flags);
4426
  label = block_label (second_bb);
4427
  jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
4428
  JUMP_LABEL (jump) = label;
4429
  LABEL_NUSES (label)++;
4430
 
4431
  if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
4432
    /* Partition type is the same, if it is "unpartitioned".  */
4433
    {
4434
      /* Rewritten from cfgrtl.c.  */
4435
      if (flag_reorder_blocks_and_partition
4436
          && targetm.have_named_sections)
4437
        {
4438
          /* We don't need the same note for the check because
4439
             any_condjump_p (check) == true.  */
4440
          add_reg_note (jump, REG_CROSSING_JUMP, NULL_RTX);
4441
        }
4442
      edge_flags = EDGE_CROSSING;
4443
    }
4444
  else
4445
    edge_flags = 0;
4446
 
4447
  make_single_succ_edge (rec, second_bb, edge_flags);
4448
}
4449
 
4450
/* This function creates recovery code for INSN.  If MUTATE_P is nonzero,
4451
   INSN is a simple check, that should be converted to branchy one.  */
4452
static void
4453
create_check_block_twin (rtx insn, bool mutate_p)
4454
{
4455
  basic_block rec;
4456
  rtx label, check, twin;
4457
  ds_t fs;
4458
  sd_iterator_def sd_it;
4459
  dep_t dep;
4460
  dep_def _new_dep, *new_dep = &_new_dep;
4461
  ds_t todo_spec;
4462
 
4463
  gcc_assert (ORIG_PAT (insn) != NULL_RTX);
4464
 
4465
  if (!mutate_p)
4466
    todo_spec = TODO_SPEC (insn);
4467
  else
4468
    {
4469
      gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
4470
                  && (TODO_SPEC (insn) & SPECULATIVE) == 0);
4471
 
4472
      todo_spec = CHECK_SPEC (insn);
4473
    }
4474
 
4475
  todo_spec &= SPECULATIVE;
4476
 
4477
  /* Create recovery block.  */
4478
  if (mutate_p || targetm.sched.needs_block_p (todo_spec))
4479
    {
4480
      rec = sched_create_recovery_block (NULL);
4481
      label = BB_HEAD (rec);
4482
    }
4483
  else
4484
    {
4485
      rec = EXIT_BLOCK_PTR;
4486
      label = NULL_RTX;
4487
    }
4488
 
4489
  /* Emit CHECK.  */
4490
  check = targetm.sched.gen_spec_check (insn, label, todo_spec);
4491
 
4492
  if (rec != EXIT_BLOCK_PTR)
4493
    {
4494
      /* To have mem_reg alive at the beginning of second_bb,
4495
         we emit check BEFORE insn, so insn after splitting
4496
         insn will be at the beginning of second_bb, which will
4497
         provide us with the correct life information.  */
4498
      check = emit_jump_insn_before (check, insn);
4499
      JUMP_LABEL (check) = label;
4500
      LABEL_NUSES (label)++;
4501
    }
4502
  else
4503
    check = emit_insn_before (check, insn);
4504
 
4505
  /* Extend data structures.  */
4506
  haifa_init_insn (check);
4507
 
4508
  /* CHECK is being added to current region.  Extend ready list.  */
4509
  gcc_assert (sched_ready_n_insns != -1);
4510
  sched_extend_ready_list (sched_ready_n_insns + 1);
4511
 
4512
  if (current_sched_info->add_remove_insn)
4513
    current_sched_info->add_remove_insn (insn, 0);
4514
 
4515
  RECOVERY_BLOCK (check) = rec;
4516
 
4517
  if (sched_verbose && spec_info->dump)
4518
    fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
4519
             (*current_sched_info->print_insn) (check, 0));
4520
 
4521
  gcc_assert (ORIG_PAT (insn));
4522
 
4523
  /* Initialize TWIN (twin is a duplicate of original instruction
4524
     in the recovery block).  */
4525
  if (rec != EXIT_BLOCK_PTR)
4526
    {
4527
      sd_iterator_def sd_it;
4528
      dep_t dep;
4529
 
4530
      FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
4531
        if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
4532
          {
4533
            struct _dep _dep2, *dep2 = &_dep2;
4534
 
4535
            init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
4536
 
4537
            sd_add_dep (dep2, true);
4538
          }
4539
 
4540
      twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
4541
      haifa_init_insn (twin);
4542
 
4543
      if (sched_verbose && spec_info->dump)
4544
        /* INSN_BB (insn) isn't determined for twin insns yet.
4545
           So we can't use current_sched_info->print_insn.  */
4546
        fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4547
                 INSN_UID (twin), rec->index);
4548
    }
4549
  else
4550
    {
4551
      ORIG_PAT (check) = ORIG_PAT (insn);
4552
      HAS_INTERNAL_DEP (check) = 1;
4553
      twin = check;
4554
      /* ??? We probably should change all OUTPUT dependencies to
4555
         (TRUE | OUTPUT).  */
4556
    }
4557
 
4558
  /* Copy all resolved back dependencies of INSN to TWIN.  This will
4559
     provide correct value for INSN_TICK (TWIN).  */
4560
  sd_copy_back_deps (twin, insn, true);
4561
 
4562
  if (rec != EXIT_BLOCK_PTR)
4563
    /* In case of branchy check, fix CFG.  */
4564
    {
4565
      basic_block first_bb, second_bb;
4566
      rtx jump;
4567
 
4568
      first_bb = BLOCK_FOR_INSN (check);
4569
      second_bb = sched_split_block (first_bb, check);
4570
 
4571
      sched_create_recovery_edges (first_bb, rec, second_bb);
4572
 
4573
      sched_init_only_bb (second_bb, first_bb);
4574
      sched_init_only_bb (rec, EXIT_BLOCK_PTR);
4575
 
4576
      jump = BB_END (rec);
4577
      haifa_init_insn (jump);
4578
    }
4579
 
4580
  /* Move backward dependences from INSN to CHECK and
4581
     move forward dependences from INSN to TWIN.  */
4582
 
4583
  /* First, create dependencies between INSN's producers and CHECK & TWIN.  */
4584
  FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4585
    {
4586
      rtx pro = DEP_PRO (dep);
4587
      ds_t ds;
4588
 
4589
      /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
4590
         check --TRUE--> producer  ??? or ANTI ???
4591
         twin  --TRUE--> producer
4592
         twin  --ANTI--> check
4593
 
4594
         If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
4595
         check --ANTI--> producer
4596
         twin  --ANTI--> producer
4597
         twin  --ANTI--> check
4598
 
4599
         If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
4600
         check ~~TRUE~~> producer
4601
         twin  ~~TRUE~~> producer
4602
         twin  --ANTI--> check  */
4603
 
4604
      ds = DEP_STATUS (dep);
4605
 
4606
      if (ds & BEGIN_SPEC)
4607
        {
4608
          gcc_assert (!mutate_p);
4609
          ds &= ~BEGIN_SPEC;
4610
        }
4611
 
4612
      init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
4613
      sd_add_dep (new_dep, false);
4614
 
4615
      if (rec != EXIT_BLOCK_PTR)
4616
        {
4617
          DEP_CON (new_dep) = twin;
4618
          sd_add_dep (new_dep, false);
4619
        }
4620
    }
4621
 
4622
  /* Second, remove backward dependencies of INSN.  */
4623
  for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4624
       sd_iterator_cond (&sd_it, &dep);)
4625
    {
4626
      if ((DEP_STATUS (dep) & BEGIN_SPEC)
4627
          || mutate_p)
4628
        /* We can delete this dep because we overcome it with
4629
           BEGIN_SPECULATION.  */
4630
        sd_delete_dep (sd_it);
4631
      else
4632
        sd_iterator_next (&sd_it);
4633
    }
4634
 
4635
  /* Future Speculations.  Determine what BE_IN speculations will be like.  */
4636
  fs = 0;
4637
 
4638
  /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
4639
     here.  */
4640
 
4641
  gcc_assert (!DONE_SPEC (insn));
4642
 
4643
  if (!mutate_p)
4644
    {
4645
      ds_t ts = TODO_SPEC (insn);
4646
 
4647
      DONE_SPEC (insn) = ts & BEGIN_SPEC;
4648
      CHECK_SPEC (check) = ts & BEGIN_SPEC;
4649
 
4650
      /* Luckiness of future speculations solely depends upon initial
4651
         BEGIN speculation.  */
4652
      if (ts & BEGIN_DATA)
4653
        fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
4654
      if (ts & BEGIN_CONTROL)
4655
        fs = set_dep_weak (fs, BE_IN_CONTROL,
4656
                           get_dep_weak (ts, BEGIN_CONTROL));
4657
    }
4658
  else
4659
    CHECK_SPEC (check) = CHECK_SPEC (insn);
4660
 
4661
  /* Future speculations: call the helper.  */
4662
  process_insn_forw_deps_be_in_spec (insn, twin, fs);
4663
 
4664
  if (rec != EXIT_BLOCK_PTR)
4665
    {
4666
      /* Which types of dependencies should we use here is,
4667
         generally, machine-dependent question...  But, for now,
4668
         it is not.  */
4669
 
4670
      if (!mutate_p)
4671
        {
4672
          init_dep (new_dep, insn, check, REG_DEP_TRUE);
4673
          sd_add_dep (new_dep, false);
4674
 
4675
          init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4676
          sd_add_dep (new_dep, false);
4677
        }
4678
      else
4679
        {
4680
          if (spec_info->dump)
4681
            fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
4682
                     (*current_sched_info->print_insn) (insn, 0));
4683
 
4684
          /* Remove all dependencies of the INSN.  */
4685
          {
4686
            sd_it = sd_iterator_start (insn, (SD_LIST_FORW
4687
                                              | SD_LIST_BACK
4688
                                              | SD_LIST_RES_BACK));
4689
            while (sd_iterator_cond (&sd_it, &dep))
4690
              sd_delete_dep (sd_it);
4691
          }
4692
 
4693
          /* If former check (INSN) already was moved to the ready (or queue)
4694
             list, add new check (CHECK) there too.  */
4695
          if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
4696
            try_ready (check);
4697
 
4698
          /* Remove old check from instruction stream and free its
4699
             data.  */
4700
          sched_remove_insn (insn);
4701
        }
4702
 
4703
      init_dep (new_dep, check, twin, REG_DEP_ANTI);
4704
      sd_add_dep (new_dep, false);
4705
    }
4706
  else
4707
    {
4708
      init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
4709
      sd_add_dep (new_dep, false);
4710
    }
4711
 
4712
  if (!mutate_p)
4713
    /* Fix priorities.  If MUTATE_P is nonzero, this is not necessary,
4714
       because it'll be done later in add_to_speculative_block.  */
4715
    {
4716
      rtx_vec_t priorities_roots = NULL;
4717
 
4718
      clear_priorities (twin, &priorities_roots);
4719
      calc_priorities (priorities_roots);
4720
      VEC_free (rtx, heap, priorities_roots);
4721
    }
4722
}
4723
 
4724
/* Removes dependency between instructions in the recovery block REC
4725
   and usual region instructions.  It keeps inner dependences so it
4726
   won't be necessary to recompute them.  */
4727
static void
4728
fix_recovery_deps (basic_block rec)
4729
{
4730
  rtx note, insn, jump, ready_list = 0;
4731
  bitmap_head in_ready;
4732
  rtx link;
4733
 
4734
  bitmap_initialize (&in_ready, 0);
4735
 
4736
  /* NOTE - a basic block note.  */
4737
  note = NEXT_INSN (BB_HEAD (rec));
4738
  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4739
  insn = BB_END (rec);
4740
  gcc_assert (JUMP_P (insn));
4741
  insn = PREV_INSN (insn);
4742
 
4743
  do
4744
    {
4745
      sd_iterator_def sd_it;
4746
      dep_t dep;
4747
 
4748
      for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
4749
           sd_iterator_cond (&sd_it, &dep);)
4750
        {
4751
          rtx consumer = DEP_CON (dep);
4752
 
4753
          if (BLOCK_FOR_INSN (consumer) != rec)
4754
            {
4755
              sd_delete_dep (sd_it);
4756
 
4757
              if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
4758
                {
4759
                  ready_list = alloc_INSN_LIST (consumer, ready_list);
4760
                  bitmap_set_bit (&in_ready, INSN_LUID (consumer));
4761
                }
4762
            }
4763
          else
4764
            {
4765
              gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4766
 
4767
              sd_iterator_next (&sd_it);
4768
            }
4769
        }
4770
 
4771
      insn = PREV_INSN (insn);
4772
    }
4773
  while (insn != note);
4774
 
4775
  bitmap_clear (&in_ready);
4776
 
4777
  /* Try to add instructions to the ready or queue list.  */
4778
  for (link = ready_list; link; link = XEXP (link, 1))
4779
    try_ready (XEXP (link, 0));
4780
  free_INSN_LIST_list (&ready_list);
4781
 
4782
  /* Fixing jump's dependences.  */
4783
  insn = BB_HEAD (rec);
4784
  jump = BB_END (rec);
4785
 
4786
  gcc_assert (LABEL_P (insn));
4787
  insn = NEXT_INSN (insn);
4788
 
4789
  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
4790
  add_jump_dependencies (insn, jump);
4791
}
4792
 
4793
/* Change pattern of INSN to NEW_PAT.  */
4794
void
4795
sched_change_pattern (rtx insn, rtx new_pat)
4796
{
4797
  int t;
4798
 
4799
  t = validate_change (insn, &PATTERN (insn), new_pat, 0);
4800
  gcc_assert (t);
4801
  dfa_clear_single_insn_cache (insn);
4802
}
4803
 
4804
/* Change pattern of INSN to NEW_PAT.  Invalidate cached haifa
4805
   instruction data.  */
4806
static void
4807
haifa_change_pattern (rtx insn, rtx new_pat)
4808
{
4809
  sched_change_pattern (insn, new_pat);
4810
 
4811
  /* Invalidate INSN_COST, so it'll be recalculated.  */
4812
  INSN_COST (insn) = -1;
4813
  /* Invalidate INSN_TICK, so it'll be recalculated.  */
4814
  INSN_TICK (insn) = INVALID_TICK;
4815
}
4816
 
4817
/* -1 - can't speculate,
4818
 
4819
   current instruction pattern,
4820
   1 - need to change pattern for *NEW_PAT to be speculative.  */
4821
int
4822
sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4823
{
4824
  gcc_assert (current_sched_info->flags & DO_SPECULATION
4825
              && (request & SPECULATIVE)
4826
              && sched_insn_is_legitimate_for_speculation_p (insn, request));
4827
 
4828
  if ((request & spec_info->mask) != request)
4829
    return -1;
4830
 
4831
  if (request & BE_IN_SPEC
4832
      && !(request & BEGIN_SPEC))
4833
    return 0;
4834
 
4835
  return targetm.sched.speculate_insn (insn, request, new_pat);
4836
}
4837
 
4838
static int
4839
haifa_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4840
{
4841
  gcc_assert (sched_deps_info->generate_spec_deps
4842
              && !IS_SPECULATION_CHECK_P (insn));
4843
 
4844
  if (HAS_INTERNAL_DEP (insn)
4845
      || SCHED_GROUP_P (insn))
4846
    return -1;
4847
 
4848
  return sched_speculate_insn (insn, request, new_pat);
4849
}
4850
 
4851
/* Print some information about block BB, which starts with HEAD and
4852
   ends with TAIL, before scheduling it.
4853
   I is zero, if scheduler is about to start with the fresh ebb.  */
4854
static void
4855
dump_new_block_header (int i, basic_block bb, rtx head, rtx tail)
4856
{
4857
  if (!i)
4858
    fprintf (sched_dump,
4859
             ";;   ======================================================\n");
4860
  else
4861
    fprintf (sched_dump,
4862
             ";;   =====================ADVANCING TO=====================\n");
4863
  fprintf (sched_dump,
4864
           ";;   -- basic block %d from %d to %d -- %s reload\n",
4865
           bb->index, INSN_UID (head), INSN_UID (tail),
4866
           (reload_completed ? "after" : "before"));
4867
  fprintf (sched_dump,
4868
           ";;   ======================================================\n");
4869
  fprintf (sched_dump, "\n");
4870
}
4871
 
4872
/* Unlink basic block notes and labels and saves them, so they
4873
   can be easily restored.  We unlink basic block notes in EBB to
4874
   provide back-compatibility with the previous code, as target backends
4875
   assume, that there'll be only instructions between
4876
   current_sched_info->{head and tail}.  We restore these notes as soon
4877
   as we can.
4878
   FIRST (LAST) is the first (last) basic block in the ebb.
4879
   NB: In usual case (FIRST == LAST) nothing is really done.  */
4880
void
4881
unlink_bb_notes (basic_block first, basic_block last)
4882
{
4883
  /* We DON'T unlink basic block notes of the first block in the ebb.  */
4884
  if (first == last)
4885
    return;
4886
 
4887
  bb_header = XNEWVEC (rtx, last_basic_block);
4888
 
4889
  /* Make a sentinel.  */
4890
  if (last->next_bb != EXIT_BLOCK_PTR)
4891
    bb_header[last->next_bb->index] = 0;
4892
 
4893
  first = first->next_bb;
4894
  do
4895
    {
4896
      rtx prev, label, note, next;
4897
 
4898
      label = BB_HEAD (last);
4899
      if (LABEL_P (label))
4900
        note = NEXT_INSN (label);
4901
      else
4902
        note = label;
4903
      gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4904
 
4905
      prev = PREV_INSN (label);
4906
      next = NEXT_INSN (note);
4907
      gcc_assert (prev && next);
4908
 
4909
      NEXT_INSN (prev) = next;
4910
      PREV_INSN (next) = prev;
4911
 
4912
      bb_header[last->index] = label;
4913
 
4914
      if (last == first)
4915
        break;
4916
 
4917
      last = last->prev_bb;
4918
    }
4919
  while (1);
4920
}
4921
 
4922
/* Restore basic block notes.
4923
   FIRST is the first basic block in the ebb.  */
4924
static void
4925
restore_bb_notes (basic_block first)
4926
{
4927
  if (!bb_header)
4928
    return;
4929
 
4930
  /* We DON'T unlink basic block notes of the first block in the ebb.  */
4931
  first = first->next_bb;
4932
  /* Remember: FIRST is actually a second basic block in the ebb.  */
4933
 
4934
  while (first != EXIT_BLOCK_PTR
4935
         && bb_header[first->index])
4936
    {
4937
      rtx prev, label, note, next;
4938
 
4939
      label = bb_header[first->index];
4940
      prev = PREV_INSN (label);
4941
      next = NEXT_INSN (prev);
4942
 
4943
      if (LABEL_P (label))
4944
        note = NEXT_INSN (label);
4945
      else
4946
        note = label;
4947
      gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4948
 
4949
      bb_header[first->index] = 0;
4950
 
4951
      NEXT_INSN (prev) = label;
4952
      NEXT_INSN (note) = next;
4953
      PREV_INSN (next) = note;
4954
 
4955
      first = first->next_bb;
4956
    }
4957
 
4958
  free (bb_header);
4959
  bb_header = 0;
4960
}
4961
 
4962
/* Helper function.
4963
   Fix CFG after both in- and inter-block movement of
4964
   control_flow_insn_p JUMP.  */
4965
static void
4966
fix_jump_move (rtx jump)
4967
{
4968
  basic_block bb, jump_bb, jump_bb_next;
4969
 
4970
  bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4971
  jump_bb = BLOCK_FOR_INSN (jump);
4972
  jump_bb_next = jump_bb->next_bb;
4973
 
4974
  gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
4975
              || IS_SPECULATION_BRANCHY_CHECK_P (jump));
4976
 
4977
  if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
4978
    /* if jump_bb_next is not empty.  */
4979
    BB_END (jump_bb) = BB_END (jump_bb_next);
4980
 
4981
  if (BB_END (bb) != PREV_INSN (jump))
4982
    /* Then there are instruction after jump that should be placed
4983
       to jump_bb_next.  */
4984
    BB_END (jump_bb_next) = BB_END (bb);
4985
  else
4986
    /* Otherwise jump_bb_next is empty.  */
4987
    BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
4988
 
4989
  /* To make assertion in move_insn happy.  */
4990
  BB_END (bb) = PREV_INSN (jump);
4991
 
4992
  update_bb_for_insn (jump_bb_next);
4993
}
4994
 
4995
/* Fix CFG after interblock movement of control_flow_insn_p JUMP.  */
4996
static void
4997
move_block_after_check (rtx jump)
4998
{
4999
  basic_block bb, jump_bb, jump_bb_next;
5000
  VEC(edge,gc) *t;
5001
 
5002
  bb = BLOCK_FOR_INSN (PREV_INSN (jump));
5003
  jump_bb = BLOCK_FOR_INSN (jump);
5004
  jump_bb_next = jump_bb->next_bb;
5005
 
5006
  update_bb_for_insn (jump_bb);
5007
 
5008
  gcc_assert (IS_SPECULATION_CHECK_P (jump)
5009
              || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
5010
 
5011
  unlink_block (jump_bb_next);
5012
  link_block (jump_bb_next, bb);
5013
 
5014
  t = bb->succs;
5015
  bb->succs = 0;
5016
  move_succs (&(jump_bb->succs), bb);
5017
  move_succs (&(jump_bb_next->succs), jump_bb);
5018
  move_succs (&t, jump_bb_next);
5019
 
5020
  df_mark_solutions_dirty ();
5021
 
5022
  common_sched_info->fix_recovery_cfg
5023
    (bb->index, jump_bb->index, jump_bb_next->index);
5024
}
5025
 
5026
/* Helper function for move_block_after_check.
5027
   This functions attaches edge vector pointed to by SUCCSP to
5028
   block TO.  */
5029
static void
5030
move_succs (VEC(edge,gc) **succsp, basic_block to)
5031
{
5032
  edge e;
5033
  edge_iterator ei;
5034
 
5035
  gcc_assert (to->succs == 0);
5036
 
5037
  to->succs = *succsp;
5038
 
5039
  FOR_EACH_EDGE (e, ei, to->succs)
5040
    e->src = to;
5041
 
5042
  *succsp = 0;
5043
}
5044
 
5045
/* Remove INSN from the instruction stream.
5046
   INSN should have any dependencies.  */
5047
static void
5048
sched_remove_insn (rtx insn)
5049
{
5050
  sd_finish_insn (insn);
5051
 
5052
  change_queue_index (insn, QUEUE_NOWHERE);
5053
  current_sched_info->add_remove_insn (insn, 1);
5054
  remove_insn (insn);
5055
}
5056
 
5057
/* Clear priorities of all instructions, that are forward dependent on INSN.
5058
   Store in vector pointed to by ROOTS_PTR insns on which priority () should
5059
   be invoked to initialize all cleared priorities.  */
5060
static void
5061
clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
5062
{
5063
  sd_iterator_def sd_it;
5064
  dep_t dep;
5065
  bool insn_is_root_p = true;
5066
 
5067
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
5068
 
5069
  FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
5070
    {
5071
      rtx pro = DEP_PRO (dep);
5072
 
5073
      if (INSN_PRIORITY_STATUS (pro) >= 0
5074
          && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
5075
        {
5076
          /* If DEP doesn't contribute to priority then INSN itself should
5077
             be added to priority roots.  */
5078
          if (contributes_to_priority_p (dep))
5079
            insn_is_root_p = false;
5080
 
5081
          INSN_PRIORITY_STATUS (pro) = -1;
5082
          clear_priorities (pro, roots_ptr);
5083
        }
5084
    }
5085
 
5086
  if (insn_is_root_p)
5087
    VEC_safe_push (rtx, heap, *roots_ptr, insn);
5088
}
5089
 
5090
/* Recompute priorities of instructions, whose priorities might have been
5091
   changed.  ROOTS is a vector of instructions whose priority computation will
5092
   trigger initialization of all cleared priorities.  */
5093
static void
5094
calc_priorities (rtx_vec_t roots)
5095
{
5096
  int i;
5097
  rtx insn;
5098
 
5099
  for (i = 0; VEC_iterate (rtx, roots, i, insn); i++)
5100
    priority (insn);
5101
}
5102
 
5103
 
5104
/* Add dependences between JUMP and other instructions in the recovery
5105
   block.  INSN is the first insn the recovery block.  */
5106
static void
5107
add_jump_dependencies (rtx insn, rtx jump)
5108
{
5109
  do
5110
    {
5111
      insn = NEXT_INSN (insn);
5112
      if (insn == jump)
5113
        break;
5114
 
5115
      if (dep_list_size (insn) == 0)
5116
        {
5117
          dep_def _new_dep, *new_dep = &_new_dep;
5118
 
5119
          init_dep (new_dep, insn, jump, REG_DEP_ANTI);
5120
          sd_add_dep (new_dep, false);
5121
        }
5122
    }
5123
  while (1);
5124
 
5125
  gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
5126
}
5127
 
5128
/* Return the NOTE_INSN_BASIC_BLOCK of BB.  */
5129
rtx
5130
bb_note (basic_block bb)
5131
{
5132
  rtx note;
5133
 
5134
  note = BB_HEAD (bb);
5135
  if (LABEL_P (note))
5136
    note = NEXT_INSN (note);
5137
 
5138
  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
5139
  return note;
5140
}
5141
 
5142
#ifdef ENABLE_CHECKING
5143
/* Helper function for check_cfg.
5144
   Return nonzero, if edge vector pointed to by EL has edge with TYPE in
5145
   its flags.  */
5146
static int
5147
has_edge_p (VEC(edge,gc) *el, int type)
5148
{
5149
  edge e;
5150
  edge_iterator ei;
5151
 
5152
  FOR_EACH_EDGE (e, ei, el)
5153
    if (e->flags & type)
5154
      return 1;
5155
  return 0;
5156
}
5157
 
5158
/* Search back, starting at INSN, for an insn that is not a
5159
   NOTE_INSN_VAR_LOCATION.  Don't search beyond HEAD, and return it if
5160
   no such insn can be found.  */
5161
static inline rtx
5162
prev_non_location_insn (rtx insn, rtx head)
5163
{
5164
  while (insn != head && NOTE_P (insn)
5165
         && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
5166
    insn = PREV_INSN (insn);
5167
 
5168
  return insn;
5169
}
5170
 
5171
/* Check few properties of CFG between HEAD and TAIL.
5172
   If HEAD (TAIL) is NULL check from the beginning (till the end) of the
5173
   instruction stream.  */
5174
static void
5175
check_cfg (rtx head, rtx tail)
5176
{
5177
  rtx next_tail;
5178
  basic_block bb = 0;
5179
  int not_first = 0, not_last;
5180
 
5181
  if (head == NULL)
5182
    head = get_insns ();
5183
  if (tail == NULL)
5184
    tail = get_last_insn ();
5185
  next_tail = NEXT_INSN (tail);
5186
 
5187
  do
5188
    {
5189
      not_last = head != tail;
5190
 
5191
      if (not_first)
5192
        gcc_assert (NEXT_INSN (PREV_INSN (head)) == head);
5193
      if (not_last)
5194
        gcc_assert (PREV_INSN (NEXT_INSN (head)) == head);
5195
 
5196
      if (LABEL_P (head)
5197
          || (NOTE_INSN_BASIC_BLOCK_P (head)
5198
              && (!not_first
5199
                  || (not_first && !LABEL_P (PREV_INSN (head))))))
5200
        {
5201
          gcc_assert (bb == 0);
5202
          bb = BLOCK_FOR_INSN (head);
5203
          if (bb != 0)
5204
            gcc_assert (BB_HEAD (bb) == head);
5205
          else
5206
            /* This is the case of jump table.  See inside_basic_block_p ().  */
5207
            gcc_assert (LABEL_P (head) && !inside_basic_block_p (head));
5208
        }
5209
 
5210
      if (bb == 0)
5211
        {
5212
          gcc_assert (!inside_basic_block_p (head));
5213
          head = NEXT_INSN (head);
5214
        }
5215
      else
5216
        {
5217
          gcc_assert (inside_basic_block_p (head)
5218
                      || NOTE_P (head));
5219
          gcc_assert (BLOCK_FOR_INSN (head) == bb);
5220
 
5221
          if (LABEL_P (head))
5222
            {
5223
              head = NEXT_INSN (head);
5224
              gcc_assert (NOTE_INSN_BASIC_BLOCK_P (head));
5225
            }
5226
          else
5227
            {
5228
              if (control_flow_insn_p (head))
5229
                {
5230
                  gcc_assert (prev_non_location_insn (BB_END (bb), head)
5231
                              == head);
5232
 
5233
                  if (any_uncondjump_p (head))
5234
                    gcc_assert (EDGE_COUNT (bb->succs) == 1
5235
                                && BARRIER_P (NEXT_INSN (head)));
5236
                  else if (any_condjump_p (head))
5237
                    gcc_assert (/* Usual case.  */
5238
                                (EDGE_COUNT (bb->succs) > 1
5239
                                 && !BARRIER_P (NEXT_INSN (head)))
5240
                                /* Or jump to the next instruction.  */
5241
                                || (EDGE_COUNT (bb->succs) == 1
5242
                                    && (BB_HEAD (EDGE_I (bb->succs, 0)->dest)
5243
                                        == JUMP_LABEL (head))));
5244
                }
5245
              if (BB_END (bb) == head)
5246
                {
5247
                  if (EDGE_COUNT (bb->succs) > 1)
5248
                    gcc_assert (control_flow_insn_p (prev_non_location_insn
5249
                                                     (head, BB_HEAD (bb)))
5250
                                || has_edge_p (bb->succs, EDGE_COMPLEX));
5251
                  bb = 0;
5252
                }
5253
 
5254
              head = NEXT_INSN (head);
5255
            }
5256
        }
5257
 
5258
      not_first = 1;
5259
    }
5260
  while (head != next_tail);
5261
 
5262
  gcc_assert (bb == 0);
5263
}
5264
 
5265
#endif /* ENABLE_CHECKING */
5266
 
5267
/* Extend per basic block data structures.  */
5268
static void
5269
extend_bb (void)
5270
{
5271
  if (sched_scan_info->extend_bb)
5272
    sched_scan_info->extend_bb ();
5273
}
5274
 
5275
/* Init data for BB.  */
5276
static void
5277
init_bb (basic_block bb)
5278
{
5279
  if (sched_scan_info->init_bb)
5280
    sched_scan_info->init_bb (bb);
5281
}
5282
 
5283
/* Extend per insn data structures.  */
5284
static void
5285
extend_insn (void)
5286
{
5287
  if (sched_scan_info->extend_insn)
5288
    sched_scan_info->extend_insn ();
5289
}
5290
 
5291
/* Init data structures for INSN.  */
5292
static void
5293
init_insn (rtx insn)
5294
{
5295
  if (sched_scan_info->init_insn)
5296
    sched_scan_info->init_insn (insn);
5297
}
5298
 
5299
/* Init all insns in BB.  */
5300
static void
5301
init_insns_in_bb (basic_block bb)
5302
{
5303
  rtx insn;
5304
 
5305
  FOR_BB_INSNS (bb, insn)
5306
    init_insn (insn);
5307
}
5308
 
5309
/* A driver function to add a set of basic blocks (BBS),
5310
   a single basic block (BB), a set of insns (INSNS) or a single insn (INSN)
5311
   to the scheduling region.  */
5312
void
5313
sched_scan (const struct sched_scan_info_def *ssi,
5314
            bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5315
{
5316
  sched_scan_info = ssi;
5317
 
5318
  if (bbs != NULL || bb != NULL)
5319
    {
5320
      extend_bb ();
5321
 
5322
      if (bbs != NULL)
5323
        {
5324
          unsigned i;
5325
          basic_block x;
5326
 
5327
          for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5328
            init_bb (x);
5329
        }
5330
 
5331
      if (bb != NULL)
5332
        init_bb (bb);
5333
    }
5334
 
5335
  extend_insn ();
5336
 
5337
  if (bbs != NULL)
5338
    {
5339
      unsigned i;
5340
      basic_block x;
5341
 
5342
      for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5343
        init_insns_in_bb (x);
5344
    }
5345
 
5346
  if (bb != NULL)
5347
    init_insns_in_bb (bb);
5348
 
5349
  if (insns != NULL)
5350
    {
5351
      unsigned i;
5352
      rtx x;
5353
 
5354
      for (i = 0; VEC_iterate (rtx, insns, i, x); i++)
5355
        init_insn (x);
5356
    }
5357
 
5358
  if (insn != NULL)
5359
    init_insn (insn);
5360
}
5361
 
5362
 
5363
/* Extend data structures for logical insn UID.  */
5364
static void
5365
luids_extend_insn (void)
5366
{
5367
  int new_luids_max_uid = get_max_uid () + 1;
5368
 
5369
  VEC_safe_grow_cleared (int, heap, sched_luids, new_luids_max_uid);
5370
}
5371
 
5372
/* Initialize LUID for INSN.  */
5373
static void
5374
luids_init_insn (rtx insn)
5375
{
5376
  int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
5377
  int luid;
5378
 
5379
  if (i >= 0)
5380
    {
5381
      luid = sched_max_luid;
5382
      sched_max_luid += i;
5383
    }
5384
  else
5385
    luid = -1;
5386
 
5387
  SET_INSN_LUID (insn, luid);
5388
}
5389
 
5390
/* Initialize luids for BBS, BB, INSNS and INSN.
5391
   The hook common_sched_info->luid_for_non_insn () is used to determine
5392
   if notes, labels, etc. need luids.  */
5393
void
5394
sched_init_luids (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5395
{
5396
  const struct sched_scan_info_def ssi =
5397
    {
5398
      NULL, /* extend_bb */
5399
      NULL, /* init_bb */
5400
      luids_extend_insn, /* extend_insn */
5401
      luids_init_insn /* init_insn */
5402
    };
5403
 
5404
  sched_scan (&ssi, bbs, bb, insns, insn);
5405
}
5406
 
5407
/* Free LUIDs.  */
5408
void
5409
sched_finish_luids (void)
5410
{
5411
  VEC_free (int, heap, sched_luids);
5412
  sched_max_luid = 1;
5413
}
5414
 
5415
/* Return logical uid of INSN.  Helpful while debugging.  */
5416
int
5417
insn_luid (rtx insn)
5418
{
5419
  return INSN_LUID (insn);
5420
}
5421
 
5422
/* Extend per insn data in the target.  */
5423
void
5424
sched_extend_target (void)
5425
{
5426
  if (targetm.sched.h_i_d_extended)
5427
    targetm.sched.h_i_d_extended ();
5428
}
5429
 
5430
/* Extend global scheduler structures (those, that live across calls to
5431
   schedule_block) to include information about just emitted INSN.  */
5432
static void
5433
extend_h_i_d (void)
5434
{
5435
  int reserve = (get_max_uid () + 1
5436
                 - VEC_length (haifa_insn_data_def, h_i_d));
5437
  if (reserve > 0
5438
      && ! VEC_space (haifa_insn_data_def, h_i_d, reserve))
5439
    {
5440
      VEC_safe_grow_cleared (haifa_insn_data_def, heap, h_i_d,
5441
                             3 * get_max_uid () / 2);
5442
      sched_extend_target ();
5443
    }
5444
}
5445
 
5446
/* Initialize h_i_d entry of the INSN with default values.
5447
   Values, that are not explicitly initialized here, hold zero.  */
5448
static void
5449
init_h_i_d (rtx insn)
5450
{
5451
  if (INSN_LUID (insn) > 0)
5452
    {
5453
      INSN_COST (insn) = -1;
5454
      QUEUE_INDEX (insn) = QUEUE_NOWHERE;
5455
      INSN_TICK (insn) = INVALID_TICK;
5456
      INTER_TICK (insn) = INVALID_TICK;
5457
      TODO_SPEC (insn) = HARD_DEP;
5458
    }
5459
}
5460
 
5461
/* Initialize haifa_insn_data for BBS, BB, INSNS and INSN.  */
5462
void
5463
haifa_init_h_i_d (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5464
{
5465
  const struct sched_scan_info_def ssi =
5466
    {
5467
      NULL, /* extend_bb */
5468
      NULL, /* init_bb */
5469
      extend_h_i_d, /* extend_insn */
5470
      init_h_i_d /* init_insn */
5471
    };
5472
 
5473
  sched_scan (&ssi, bbs, bb, insns, insn);
5474
}
5475
 
5476
/* Finalize haifa_insn_data.  */
5477
void
5478
haifa_finish_h_i_d (void)
5479
{
5480
  int i;
5481
  haifa_insn_data_t data;
5482
  struct reg_use_data *use, *next;
5483
 
5484
  for (i = 0; VEC_iterate (haifa_insn_data_def, h_i_d, i, data); i++)
5485
    {
5486
      if (data->reg_pressure != NULL)
5487
        free (data->reg_pressure);
5488
      for (use = data->reg_use_list; use != NULL; use = next)
5489
        {
5490
          next = use->next_insn_use;
5491
          free (use);
5492
        }
5493
    }
5494
  VEC_free (haifa_insn_data_def, heap, h_i_d);
5495
}
5496
 
5497
/* Init data for the new insn INSN.  */
5498
static void
5499
haifa_init_insn (rtx insn)
5500
{
5501
  gcc_assert (insn != NULL);
5502
 
5503
  sched_init_luids (NULL, NULL, NULL, insn);
5504
  sched_extend_target ();
5505
  sched_deps_init (false);
5506
  haifa_init_h_i_d (NULL, NULL, NULL, insn);
5507
 
5508
  if (adding_bb_to_current_region_p)
5509
    {
5510
      sd_init_insn (insn);
5511
 
5512
      /* Extend dependency caches by one element.  */
5513
      extend_dependency_caches (1, false);
5514
    }
5515
}
5516
 
5517
/* Init data for the new basic block BB which comes after AFTER.  */
5518
static void
5519
haifa_init_only_bb (basic_block bb, basic_block after)
5520
{
5521
  gcc_assert (bb != NULL);
5522
 
5523
  sched_init_bbs ();
5524
 
5525
  if (common_sched_info->add_block)
5526
    /* This changes only data structures of the front-end.  */
5527
    common_sched_info->add_block (bb, after);
5528
}
5529
 
5530
/* A generic version of sched_split_block ().  */
5531
basic_block
5532
sched_split_block_1 (basic_block first_bb, rtx after)
5533
{
5534
  edge e;
5535
 
5536
  e = split_block (first_bb, after);
5537
  gcc_assert (e->src == first_bb);
5538
 
5539
  /* sched_split_block emits note if *check == BB_END.  Probably it
5540
     is better to rip that note off.  */
5541
 
5542
  return e->dest;
5543
}
5544
 
5545
/* A generic version of sched_create_empty_bb ().  */
5546
basic_block
5547
sched_create_empty_bb_1 (basic_block after)
5548
{
5549
  return create_empty_bb (after);
5550
}
5551
 
5552
/* Insert PAT as an INSN into the schedule and update the necessary data
5553
   structures to account for it. */
5554
rtx
5555
sched_emit_insn (rtx pat)
5556
{
5557
  rtx insn = emit_insn_after (pat, last_scheduled_insn);
5558
  last_scheduled_insn = insn;
5559
  haifa_init_insn (insn);
5560
  return insn;
5561
}
5562
 
5563
#endif /* INSN_SCHEDULING */

powered by: WebSVN 2.1.0

© copyright 1999-2024 OpenCores.org, equivalent to Oliscience, all rights reserved. OpenCores®, registered trademark.