/* Instruction scheduling pass.
   Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
   2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010
   Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
   and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 3, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

/* Instruction scheduling pass.  This file, along with sched-deps.c,
   contains the generic parts.  The entry point for the normal
   instruction scheduling pass is found in sched-rgn.c.

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning values
   to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   the remaining slots.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with the longest path to end of bb, ties
   broken by
   2.  choose insn with least contribution to register pressure,
   ties broken by
   3.  prefer in-block over interblock motion, ties broken by
   4.  prefer useful over speculative motion, ties broken by
   5.  choose insn with largest control flow probability, ties
   broken by
   6.  choose insn with the least dependences upon the previously
   scheduled insn, ties broken by
   7.  choose the insn which has the most insns dependent on it,
   or finally
   8.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence) can we move operations past
   memory references.  To a first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Before reload, an extended analysis of interblock data dependences
   is required for interblock scheduling.  This is performed in
   compute_block_backward_dependences ().

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using insn backward dependences
   INSN_BACK_DEPS.  INSN_BACK_DEPS are translated into forward dependences
   INSN_FORW_DEPS for the purpose of forward list scheduling.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   utilization.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not move branches.  But that is OK,
   since we can use GNU's delayed branch scheduling pass to take care
   of this case.

   Also note that no further optimizations based on algebraic
   identities are performed, so this pass would be a good one in which
   to perform instruction splitting, such as breaking up a multiply
   instruction into shifts and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   Before reload, speculative insns are moved only if a 'proof' exists
   that no exception will be caused by this, and if no live registers
   exist that inhibit the motion (live register constraints are not
   represented by data dependence edges).

   This pass must update information that subsequent passes expect to
   be correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, BB_HEAD, BB_END.

   The information in the line number notes is carefully retained by
   this pass.  Notes that refer to the starting and ending of
   exception regions are also carefully retained by this pass.  All
   other NOTE insns are grouped in their same relative order at the
   beginning of basic blocks and regions that have been scheduled.  */
 
127
#include "config.h"
128
#include "system.h"
129
#include "coretypes.h"
130
#include "tm.h"
131
#include "toplev.h"
132
#include "rtl.h"
133
#include "tm_p.h"
134
#include "hard-reg-set.h"
135
#include "regs.h"
136
#include "function.h"
137
#include "flags.h"
138
#include "insn-config.h"
139
#include "insn-attr.h"
140
#include "except.h"
141
#include "toplev.h"
142
#include "recog.h"
143
#include "sched-int.h"
144
#include "target.h"
145
#include "output.h"
146
#include "params.h"
147
#include "vecprim.h"
148
#include "dbgcnt.h"
149
#include "cfgloop.h"
150
#include "ira.h"
151
 
152
#ifdef INSN_SCHEDULING
153
 
154
/* issue_rate is the number of insns that can be scheduled in the same
155
   machine cycle.  It can be defined in the config/mach/mach.h file,
156
   otherwise we set it to 1.  */
157
 
158
int issue_rate;
159
 
160
/* sched-verbose controls the amount of debugging output the
161
   scheduler prints.  It is controlled by -fsched-verbose=N:
162
   N>0 and no -DSR : the output is directed to stderr.
163
   N>=10 will direct the printouts to stderr (regardless of -dSR).
164
   N=1: same as -dSR.
165
   N=2: bb's probabilities, detailed ready list info, unit/insn info.
166
   N=3: rtl at abort point, control-flow, regions info.
167
   N=5: dependences info.  */
168
 
169
static int sched_verbose_param = 0;
170
int sched_verbose = 0;
171
 
172
/* Debugging file.  All printouts are sent to dump, which is always set,
173
   either to stderr, or to the dump listing file (-dRS).  */
174
FILE *sched_dump = 0;
175
 
176
/* fix_sched_param() is called from toplev.c upon detection
177
   of the -fsched-verbose=N option.  */
178
 
179
void
180
fix_sched_param (const char *param, const char *val)
181
{
182
  if (!strcmp (param, "verbose"))
183
    sched_verbose_param = atoi (val);
184
  else
185
    warning (0, "fix_sched_param: unknown param: %s", param);
186
}
187
 
188
/* This is a placeholder for the scheduler parameters common
189
   to all schedulers.  */
190
struct common_sched_info_def *common_sched_info;
191
 
192
#define INSN_TICK(INSN) (HID (INSN)->tick)
193
#define INTER_TICK(INSN) (HID (INSN)->inter_tick)
194
 
195
/* If INSN_TICK of an instruction is equal to INVALID_TICK,
196
   then it should be recalculated from scratch.  */
197
#define INVALID_TICK (-(max_insn_queue_index + 1))
198
/* The minimal value of the INSN_TICK of an instruction.  */
199
#define MIN_TICK (-max_insn_queue_index)
200
 
201
/* Issue points are used to distinguish between instructions in max_issue ().
202
   For now, all instructions are equally good.  */
203
#define ISSUE_POINTS(INSN) 1
204
 
205
/* List of important notes we must keep around.  This is a pointer to the
206
   last element in the list.  */
207
rtx note_list;
208
 
209
static struct spec_info_def spec_info_var;
210
/* Description of the speculative part of the scheduling.
211
   If NULL - no speculation.  */
212
spec_info_t spec_info = NULL;
213
 
214
/* True if recovery block was added during scheduling of current block.
215
   Used to determine if we need to fix INSN_TICKs.  */
216
static bool haifa_recovery_bb_recently_added_p;
217
 
218
/* True if recovery block was added during this scheduling pass.
219
   Used to determine if we should have empty memory pools of dependencies
220
   after finishing current region.  */
221
bool haifa_recovery_bb_ever_added_p;
222
 
223
/* Counters of different types of speculative instructions.  */
224
static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
225
 
226
/* Array used in {unlink, restore}_bb_notes.  */
227
static rtx *bb_header = 0;
228
 
229
/* Basic block after which recovery blocks will be created.  */
230
static basic_block before_recovery;
231
 
232
/* Basic block just before the EXIT_BLOCK and after recovery, if we have
233
   created it.  */
234
basic_block after_recovery;
235
 
236
/* FALSE if we add bb to another region, so we don't need to initialize it.  */
237
bool adding_bb_to_current_region_p = true;
238
 
239
/* Queues, etc.  */
240
 
241
/* An instruction is ready to be scheduled when all insns preceding it
242
   have already been scheduled.  It is important to ensure that all
243
   insns which use its result will not be executed until its result
244
   has been computed.  An insn is maintained in one of four structures:
245
 
246
   (P) the "Pending" set of insns which cannot be scheduled until
247
   their dependencies have been satisfied.
248
   (Q) the "Queued" set of insns that can be scheduled when sufficient
249
   time has passed.
250
   (R) the "Ready" list of unscheduled, uncommitted insns.
251
   (S) the "Scheduled" list of insns.
252
 
253
   Initially, all insns are either "Pending" or "Ready" depending on
254
   whether their dependencies are satisfied.
255
 
256
   Insns move from the "Ready" list to the "Scheduled" list as they
257
   are committed to the schedule.  As this occurs, the insns in the
258
   "Pending" list have their dependencies satisfied and move to either
259
   the "Ready" list or the "Queued" set depending on whether
260
   sufficient time has passed to make them ready.  As time passes,
261
   insns move from the "Queued" set to the "Ready" list.
262
 
263
   The "Pending" list (P) are the insns in the INSN_FORW_DEPS of the
264
   unscheduled insns, i.e., those that are ready, queued, and pending.
265
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
266
   The "Ready" list (R) is implemented by the variables `ready' and
267
   `n_ready'.
268
   The "Scheduled" list (S) is the new insn chain built by this pass.
269
 
270
   The transition (R->S) is implemented in the scheduling loop in
271
   `schedule_block' when the best insn to schedule is chosen.
272
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
273
   insns move from the ready list to the scheduled list.
274
   The transition (Q->R) is implemented in 'queue_to_insn' as time
275
   passes or stalls are introduced.  */
276
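/* Illustrative sketch (editorial addition, not part of the original
   source): the transitions described above, reduced to a standalone
   model.  All helper names below (model_pick_best, model_queue_to_ready,
   model_release_dependents, model_append_to_schedule) are hypothetical
   stand-ins for the real machinery in schedule_block, schedule_insn and
   queue_to_ready.

     while (insns_left > 0)
       {
         if (ready.n_ready == 0)
           {
             clock_var++;                            (a stall)
             model_queue_to_ready (&queue, &ready);  (Q -> R)
             continue;
           }
         insn = model_pick_best (&ready);      (best by rank_for_schedule)
         model_append_to_schedule (insn);      (R -> S)
         insns_left--;
         model_release_dependents (insn, &ready, &queue);  (P -> R or P -> Q)
       }
*/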
 
277
/* Implement a circular buffer to delay instructions until sufficient
278
   time has passed.  For the new pipeline description interface,
279
   MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
280
   than the maximal time of instruction execution computed by genattr.c
281
   based on the maximal time of functional unit reservations needed to get a
282
   result.  This is the longest time an insn may be queued.  */
283
 
284
static rtx *insn_queue;
285
static int q_ptr = 0;
286
static int q_size = 0;
287
#define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
288
#define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
289
 
290
#define QUEUE_SCHEDULED (-3)
291
#define QUEUE_NOWHERE   (-2)
292
#define QUEUE_READY     (-1)
293
/* QUEUE_SCHEDULED - INSN is scheduled.
294
   QUEUE_NOWHERE   - INSN isn't scheduled yet and is neither in
295
   the queue nor the ready list.
296
   QUEUE_READY     - INSN is in ready list.
297
   N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles.  */
298
 
299
#define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)
300
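/* Worked example (editorial addition, not part of the original source):
   assuming max_insn_queue_index == 63 (a power of two minus one),
   queueing an insn N_CYCLES == 5 ahead while q_ptr == 61 places it in
   slot NEXT_Q_AFTER (61, 5) == (61 + 5) & 63 == 2, i.e. the circular
   buffer wraps around.  QUEUE_INDEX of that insn then holds the slot
   number 2, whereas an insn on the ready list has QUEUE_INDEX ==
   QUEUE_READY (-1) and an already issued one has QUEUE_INDEX ==
   QUEUE_SCHEDULED (-3).  */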
 
301
/* The following variable value refers to all current and future
302
   reservations of the processor units.  */
303
state_t curr_state;
304
 
305
/* The following variable value is the size of memory representing all
306
   current and future reservations of the processor units.  */
307
size_t dfa_state_size;
308
 
309
/* The following array is used to find the best insn from ready when
310
   the automaton pipeline interface is used.  */
311
char *ready_try = NULL;
312
 
313
/* The ready list.  */
314
struct ready_list ready = {NULL, 0, 0, 0, 0};
315
 
316
/* The pointer to the ready list (to be removed).  */
317
static struct ready_list *readyp = &ready;
318
 
319
/* Scheduling clock.  */
320
static int clock_var;
321
 
322
static int may_trap_exp (const_rtx, int);
323
 
324
/* Nonzero iff the address is composed of at most 1 register.  */
325
#define CONST_BASED_ADDRESS_P(x)                        \
326
  (REG_P (x)                                    \
327
   || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS   \
328
        || (GET_CODE (x) == LO_SUM))                    \
329
       && (CONSTANT_P (XEXP (x, 0))                      \
330
           || CONSTANT_P (XEXP (x, 1)))))
331
 
332
/* Returns a class that insn with GET_DEST(insn)=x may belong to,
333
   as found by analyzing insn's expression.  */
334
 
335
 
336
static int haifa_luid_for_non_insn (rtx x);
337
 
338
/* Haifa version of sched_info hooks common to all headers.  */
339
const struct common_sched_info_def haifa_common_sched_info =
340
  {
341
    NULL, /* fix_recovery_cfg */
342
    NULL, /* add_block */
343
    NULL, /* estimate_number_of_insns */
344
    haifa_luid_for_non_insn, /* luid_for_non_insn */
345
    SCHED_PASS_UNKNOWN /* sched_pass_id */
346
  };
347
 
348
const struct sched_scan_info_def *sched_scan_info;
349
 
350
/* Mapping from instruction UID to its Logical UID.  */
351
VEC (int, heap) *sched_luids = NULL;
352
 
353
/* Next LUID to assign to an instruction.  */
354
int sched_max_luid = 1;
355
 
356
/* Haifa Instruction Data.  */
357
VEC (haifa_insn_data_def, heap) *h_i_d = NULL;
358
 
359
void (* sched_init_only_bb) (basic_block, basic_block);
360
 
361
/* Split block function.  Different schedulers might use different functions
362
   to keep their internal data consistent.  */
363
basic_block (* sched_split_block) (basic_block, rtx);
364
 
365
/* Create empty basic block after the specified block.  */
366
basic_block (* sched_create_empty_bb) (basic_block);
367
 
368
static int
369
may_trap_exp (const_rtx x, int is_store)
370
{
371
  enum rtx_code code;
372
 
373
  if (x == 0)
374
    return TRAP_FREE;
375
  code = GET_CODE (x);
376
  if (is_store)
377
    {
378
      if (code == MEM && may_trap_p (x))
379
        return TRAP_RISKY;
380
      else
381
        return TRAP_FREE;
382
    }
383
  if (code == MEM)
384
    {
385
      /* The insn uses memory:  a volatile load.  */
386
      if (MEM_VOLATILE_P (x))
387
        return IRISKY;
388
      /* An exception-free load.  */
389
      if (!may_trap_p (x))
390
        return IFREE;
391
      /* A load with 1 base register, to be further checked.  */
392
      if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
393
        return PFREE_CANDIDATE;
394
      /* No info on the load, to be further checked.  */
395
      return PRISKY_CANDIDATE;
396
    }
397
  else
398
    {
399
      const char *fmt;
400
      int i, insn_class = TRAP_FREE;
401
 
402
      /* Neither store nor load, check if it may cause a trap.  */
403
      if (may_trap_p (x))
404
        return TRAP_RISKY;
405
      /* Recursive step: walk the insn...  */
406
      fmt = GET_RTX_FORMAT (code);
407
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
408
        {
409
          if (fmt[i] == 'e')
410
            {
411
              int tmp_class = may_trap_exp (XEXP (x, i), is_store);
412
              insn_class = WORST_CLASS (insn_class, tmp_class);
413
            }
414
          else if (fmt[i] == 'E')
415
            {
416
              int j;
417
              for (j = 0; j < XVECLEN (x, i); j++)
418
                {
419
                  int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
420
                  insn_class = WORST_CLASS (insn_class, tmp_class);
421
                  if (insn_class == TRAP_RISKY || insn_class == IRISKY)
422
                    break;
423
                }
424
            }
425
          if (insn_class == TRAP_RISKY || insn_class == IRISKY)
426
            break;
427
        }
428
      return insn_class;
429
    }
430
}
431
 
432
/* Classifies rtx X of an insn for the purpose of verifying that X can be
433
   executed speculatively (and consequently the insn can be moved
434
   speculatively), by examining X, returning:
435
   TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
436
   TRAP_FREE: non-load insn.
437
   IFREE: load from a globally safe location.
438
   IRISKY: volatile load.
439
   PFREE_CANDIDATE, PRISKY_CANDIDATE: load that needs to be checked for
440
   being either PFREE or PRISKY.  */
441
 
442
static int
443
haifa_classify_rtx (const_rtx x)
444
{
445
  int tmp_class = TRAP_FREE;
446
  int insn_class = TRAP_FREE;
447
  enum rtx_code code;
448
 
449
  if (GET_CODE (x) == PARALLEL)
450
    {
451
      int i, len = XVECLEN (x, 0);
452
 
453
      for (i = len - 1; i >= 0; i--)
454
        {
455
          tmp_class = haifa_classify_rtx (XVECEXP (x, 0, i));
456
          insn_class = WORST_CLASS (insn_class, tmp_class);
457
          if (insn_class == TRAP_RISKY || insn_class == IRISKY)
458
            break;
459
        }
460
    }
461
  else
462
    {
463
      code = GET_CODE (x);
464
      switch (code)
465
        {
466
        case CLOBBER:
467
          /* Test if it is a 'store'.  */
468
          tmp_class = may_trap_exp (XEXP (x, 0), 1);
469
          break;
470
        case SET:
471
          /* Test if it is a store.  */
472
          tmp_class = may_trap_exp (SET_DEST (x), 1);
473
          if (tmp_class == TRAP_RISKY)
474
            break;
475
          /* Test if it is a load.  */
476
          tmp_class =
477
            WORST_CLASS (tmp_class,
478
                         may_trap_exp (SET_SRC (x), 0));
479
          break;
480
        case COND_EXEC:
481
          tmp_class = haifa_classify_rtx (COND_EXEC_CODE (x));
482
          if (tmp_class == TRAP_RISKY)
483
            break;
484
          tmp_class = WORST_CLASS (tmp_class,
485
                                   may_trap_exp (COND_EXEC_TEST (x), 0));
486
          break;
487
        case TRAP_IF:
488
          tmp_class = TRAP_RISKY;
489
          break;
490
        default:;
491
        }
492
      insn_class = tmp_class;
493
    }
494
 
495
  return insn_class;
496
}
497
 
498
int
499
haifa_classify_insn (const_rtx insn)
500
{
501
  return haifa_classify_rtx (PATTERN (insn));
502
}
503
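/* Worked example (editorial addition, not part of the original source):
   for a SET, the destination is classified as a store and the source as
   a load.  A load whose MEM may trap but whose address is based on at
   most one register, e.g. (mem (plus (reg) (const_int 4))), classifies
   as PFREE_CANDIDATE; the same load from a two-register address such as
   (mem (plus (reg) (reg))) is PRISKY_CANDIDATE; a volatile load is
   IRISKY; a provably trap-free load is IFREE; and a store into a MEM
   that may trap makes the whole insn TRAP_RISKY.  */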
 
504
/* Forward declarations.  */
505
 
506
static int priority (rtx);
507
static int rank_for_schedule (const void *, const void *);
508
static void swap_sort (rtx *, int);
509
static void queue_insn (rtx, int);
510
static int schedule_insn (rtx);
511
static void adjust_priority (rtx);
512
static void advance_one_cycle (void);
513
static void extend_h_i_d (void);
514
 
515
 
516
/* Notes handling mechanism:
517
   =========================
518
   Generally, NOTES are saved before scheduling and restored after scheduling.
519
   The scheduler distinguishes between two types of notes:
520
 
521
   (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
522
   Before scheduling a region, a pointer to the note is added to the insn
523
   that follows or precedes it.  (This happens as part of the data dependence
524
   computation).  After scheduling an insn, the pointer contained in it is
525
   used for regenerating the corresponding note (in reemit_notes).
526
 
527
   (2) All other notes (e.g. INSN_DELETED):  Before scheduling a block,
528
   these notes are put in a list (in rm_other_notes() and
529
   unlink_other_notes ()).  After scheduling the block, these notes are
530
   inserted at the beginning of the block (in schedule_block()).  */
531
 
532
static void ready_add (struct ready_list *, rtx, bool);
533
static rtx ready_remove_first (struct ready_list *);
534
 
535
static void queue_to_ready (struct ready_list *);
536
static int early_queue_to_ready (state_t, struct ready_list *);
537
 
538
static void debug_ready_list (struct ready_list *);
539
 
540
/* The following functions are used to implement multi-pass scheduling
541
   on the first cycle.  */
542
static rtx ready_remove (struct ready_list *, int);
543
static void ready_remove_insn (rtx);
544
 
545
static int choose_ready (struct ready_list *, rtx *);
546
 
547
static void fix_inter_tick (rtx, rtx);
548
static int fix_tick_ready (rtx);
549
static void change_queue_index (rtx, int);
550
 
551
/* The following functions are used to implement scheduling of data/control
552
   speculative instructions.  */
553
 
554
static void extend_h_i_d (void);
555
static void init_h_i_d (rtx);
556
static void generate_recovery_code (rtx);
557
static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
558
static void begin_speculative_block (rtx);
559
static void add_to_speculative_block (rtx);
560
static void init_before_recovery (basic_block *);
561
static void create_check_block_twin (rtx, bool);
562
static void fix_recovery_deps (basic_block);
563
static void haifa_change_pattern (rtx, rtx);
564
static void dump_new_block_header (int, basic_block, rtx, rtx);
565
static void restore_bb_notes (basic_block);
566
static void fix_jump_move (rtx);
567
static void move_block_after_check (rtx);
568
static void move_succs (VEC(edge,gc) **, basic_block);
569
static void sched_remove_insn (rtx);
570
static void clear_priorities (rtx, rtx_vec_t *);
571
static void calc_priorities (rtx_vec_t);
572
static void add_jump_dependencies (rtx, rtx);
573
#ifdef ENABLE_CHECKING
574
static int has_edge_p (VEC(edge,gc) *, int);
575
static void check_cfg (rtx, rtx);
576
#endif
577
 
578
#endif /* INSN_SCHEDULING */
579
 
580
/* Point to state used for the current scheduling pass.  */
581
struct haifa_sched_info *current_sched_info;
582
 
583
#ifndef INSN_SCHEDULING
584
void
585
schedule_insns (void)
586
{
587
}
588
#else
589
 
590
/* Do register pressure sensitive insn scheduling if the flag is set
591
   up.  */
592
bool sched_pressure_p;
593
 
594
/* Map regno -> its cover class.  The map is defined only when
595
   SCHED_PRESSURE_P is true.  */
596
enum reg_class *sched_regno_cover_class;
597
 
598
/* The current register pressure.  Only elements corresponding to cover
599
   classes are defined.  */
600
static int curr_reg_pressure[N_REG_CLASSES];
601
 
602
/* Saved value of the previous array.  */
603
static int saved_reg_pressure[N_REG_CLASSES];
604
 
605
/* Registers living at the given scheduling point.  */
606
static bitmap curr_reg_live;
607
 
608
/* Saved value of the previous array.  */
609
static bitmap saved_reg_live;
610
 
611
/* Registers mentioned in the current region.  */
612
static bitmap region_ref_regs;
613
 
614
/* Initiate register pressure related info for scheduling the current
615
   region.  Currently it is only clearing registers mentioned in the
616
   current region.  */
617
void
618
sched_init_region_reg_pressure_info (void)
619
{
620
  bitmap_clear (region_ref_regs);
621
}
622
 
623
/* Update current register pressure related info after birth (if
624
   BIRTH_P) or death of register REGNO.  */
625
static void
626
mark_regno_birth_or_death (int regno, bool birth_p)
627
{
628
  enum reg_class cover_class;
629
 
630
  cover_class = sched_regno_cover_class[regno];
631
  if (regno >= FIRST_PSEUDO_REGISTER)
632
    {
633
      if (cover_class != NO_REGS)
634
        {
635
          if (birth_p)
636
            {
637
              bitmap_set_bit (curr_reg_live, regno);
638
              curr_reg_pressure[cover_class]
639
                += ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
640
            }
641
          else
642
            {
643
              bitmap_clear_bit (curr_reg_live, regno);
644
              curr_reg_pressure[cover_class]
645
                -= ira_reg_class_nregs[cover_class][PSEUDO_REGNO_MODE (regno)];
646
            }
647
        }
648
    }
649
  else if (cover_class != NO_REGS
650
           && ! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
651
    {
652
      if (birth_p)
653
        {
654
          bitmap_set_bit (curr_reg_live, regno);
655
          curr_reg_pressure[cover_class]++;
656
        }
657
      else
658
        {
659
          bitmap_clear_bit (curr_reg_live, regno);
660
          curr_reg_pressure[cover_class]--;
661
        }
662
    }
663
}
664
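/* Worked example (editorial addition, not part of the original source):
   assume a target whose GENERAL_REGS cover class needs two registers
   for a DImode value, i.e. ira_reg_class_nregs[GENERAL_REGS][DImode]
   == 2.  When a DImode pseudo of that class becomes live,
   mark_regno_birth_or_death (regno, true) raises
   curr_reg_pressure[GENERAL_REGS] by 2, and the matching death lowers
   it by 2 again.  A hard register, by contrast, changes the pressure of
   its cover class by exactly 1, and only when it is allocatable (not in
   ira_no_alloc_regs).  */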
 
665
/* Initiate current register pressure related info from living
666
   registers given by LIVE.  */
667
static void
668
initiate_reg_pressure_info (bitmap live)
669
{
670
  int i;
671
  unsigned int j;
672
  bitmap_iterator bi;
673
 
674
  for (i = 0; i < ira_reg_class_cover_size; i++)
675
    curr_reg_pressure[ira_reg_class_cover[i]] = 0;
676
  bitmap_clear (curr_reg_live);
677
  EXECUTE_IF_SET_IN_BITMAP (live, 0, j, bi)
678
    if (current_nr_blocks == 1 || bitmap_bit_p (region_ref_regs, j))
679
      mark_regno_birth_or_death (j, true);
680
}
681
 
682
/* Mark registers in X as mentioned in the current region.  */
683
static void
684
setup_ref_regs (rtx x)
685
{
686
  int i, j, regno;
687
  const RTX_CODE code = GET_CODE (x);
688
  const char *fmt;
689
 
690
  if (REG_P (x))
691
    {
692
      regno = REGNO (x);
693
      if (regno >= FIRST_PSEUDO_REGISTER)
694
        bitmap_set_bit (region_ref_regs, REGNO (x));
695
      else
696
        for (i = hard_regno_nregs[regno][GET_MODE (x)] - 1; i >= 0; i--)
697
          bitmap_set_bit (region_ref_regs, regno + i);
698
      return;
699
    }
700
  fmt = GET_RTX_FORMAT (code);
701
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
702
    if (fmt[i] == 'e')
703
      setup_ref_regs (XEXP (x, i));
704
    else if (fmt[i] == 'E')
705
      {
706
        for (j = 0; j < XVECLEN (x, i); j++)
707
          setup_ref_regs (XVECEXP (x, i, j));
708
      }
709
}
710
 
711
/* Initiate current register pressure related info at the start of
712
   basic block BB.  */
713
static void
714
initiate_bb_reg_pressure_info (basic_block bb)
715
{
716
  /* JPB 1-Sep-10: Fixed warning. */
717
#ifdef EH_RETURN_DATA_REGNO
718
  unsigned int i;
719
#endif
720
  rtx insn;
721
 
722
  if (current_nr_blocks > 1)
723
    FOR_BB_INSNS (bb, insn)
724
      if (NONDEBUG_INSN_P (insn))
725
        setup_ref_regs (PATTERN (insn));
726
  initiate_reg_pressure_info (df_get_live_in (bb));
727
#ifdef EH_RETURN_DATA_REGNO
728
  if (bb_has_eh_pred (bb))
729
    for (i = 0; ; ++i)
730
      {
731
        unsigned int regno = EH_RETURN_DATA_REGNO (i);
732
 
733
        if (regno == INVALID_REGNUM)
734
          break;
735
        if (! bitmap_bit_p (df_get_live_in (bb), regno))
736
          mark_regno_birth_or_death (regno, true);
737
      }
738
#endif
739
}
740
 
741
/* Save current register pressure related info.  */
742
static void
743
save_reg_pressure (void)
744
{
745
  int i;
746
 
747
  for (i = 0; i < ira_reg_class_cover_size; i++)
748
    saved_reg_pressure[ira_reg_class_cover[i]]
749
      = curr_reg_pressure[ira_reg_class_cover[i]];
750
  bitmap_copy (saved_reg_live, curr_reg_live);
751
}
752
 
753
/* Restore saved register pressure related info.  */
754
static void
755
restore_reg_pressure (void)
756
{
757
  int i;
758
 
759
  for (i = 0; i < ira_reg_class_cover_size; i++)
760
    curr_reg_pressure[ira_reg_class_cover[i]]
761
      = saved_reg_pressure[ira_reg_class_cover[i]];
762
  bitmap_copy (curr_reg_live, saved_reg_live);
763
}
764
 
765
/* Return TRUE if the register is dying after its USE.  */
766
static bool
767
dying_use_p (struct reg_use_data *use)
768
{
769
  struct reg_use_data *next;
770
 
771
  for (next = use->next_regno_use; next != use; next = next->next_regno_use)
772
    if (NONDEBUG_INSN_P (next->insn)
773
        && QUEUE_INDEX (next->insn) != QUEUE_SCHEDULED)
774
      return false;
775
  return true;
776
}
777
 
778
/* Print info about the current register pressure and its excess for
779
   each cover class.  */
780
static void
781
print_curr_reg_pressure (void)
782
{
783
  int i;
784
  enum reg_class cl;
785
 
786
  fprintf (sched_dump, ";;\t");
787
  for (i = 0; i < ira_reg_class_cover_size; i++)
788
    {
789
      cl = ira_reg_class_cover[i];
790
      gcc_assert (curr_reg_pressure[cl] >= 0);
791
      fprintf (sched_dump, "  %s:%d(%d)", reg_class_names[cl],
792
               curr_reg_pressure[cl],
793
               curr_reg_pressure[cl] - ira_available_class_regs[cl]);
794
    }
795
  fprintf (sched_dump, "\n");
796
}
797
 
798
/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
799
   so that insns independent of the last scheduled insn will be preferred
800
   over dependent instructions.  */
801
 
802
static rtx last_scheduled_insn;
803
 
804
/* Cached cost of the instruction.  Use the function below to get the cost of the
805
   insn.  -1 here means that the field is not initialized.  */
806
#define INSN_COST(INSN) (HID (INSN)->cost)
807
 
808
/* Compute cost of executing INSN.
809
   This is the number of cycles between instruction issue and
810
   instruction results.  */
811
int
812
insn_cost (rtx insn)
813
{
814
  int cost;
815
 
816
  if (sel_sched_p ())
817
    {
818
      if (recog_memoized (insn) < 0)
819
        return 0;
820
 
821
      cost = insn_default_latency (insn);
822
      if (cost < 0)
823
        cost = 0;
824
 
825
      return cost;
826
    }
827
 
828
  cost = INSN_COST (insn);
829
 
830
  if (cost < 0)
831
    {
832
      /* A USE insn, or something else we don't need to
833
         understand.  We can't pass these directly to
834
         result_ready_cost or insn_default_latency because it will
835
         trigger a fatal error for unrecognizable insns.  */
836
      if (recog_memoized (insn) < 0)
837
        {
838
          INSN_COST (insn) = 0;
839
          return 0;
840
        }
841
      else
842
        {
843
          cost = insn_default_latency (insn);
844
          if (cost < 0)
845
            cost = 0;
846
 
847
          INSN_COST (insn) = cost;
848
        }
849
    }
850
 
851
  return cost;
852
}
853
 
854
/* Compute cost of dependence LINK.
855
   This is the number of cycles between instruction issue and
856
   instruction results.
857
   ??? We also use this function to call recog_memoized on all insns.  */
858
int
859
dep_cost_1 (dep_t link, dw_t dw)
860
{
861
  rtx insn = DEP_PRO (link);
862
  rtx used = DEP_CON (link);
863
  int cost;
864
 
865
  /* A USE insn should never require the value used to be computed.
866
     This allows the computation of a function's result and parameter
867
     values to overlap the return and call.  We don't care about
868
     the dependence cost when only decreasing register pressure.  */
869
  if (recog_memoized (used) < 0)
870
    {
871
      cost = 0;
872
      recog_memoized (insn);
873
    }
874
  else
875
    {
876
      enum reg_note dep_type = DEP_TYPE (link);
877
 
878
      cost = insn_cost (insn);
879
 
880
      if (INSN_CODE (insn) >= 0)
881
        {
882
          if (dep_type == REG_DEP_ANTI)
883
            cost = 0;
884
          else if (dep_type == REG_DEP_OUTPUT)
885
            {
886
              cost = (insn_default_latency (insn)
887
                      - insn_default_latency (used));
888
              if (cost <= 0)
889
                cost = 1;
890
            }
891
          else if (bypass_p (insn))
892
            cost = insn_latency (insn, used);
893
        }
894
 
895
 
896
      if (targetm.sched.adjust_cost_2)
897
        cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
898
                                            dw);
899
      else if (targetm.sched.adjust_cost != NULL)
900
        {
901
          /* This variable is used for backward compatibility with the
902
             targets.  */
903
          rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
904
 
905
          /* Make it self-cycled, so that if someone tries to walk over this
906
             incomplete list he/she will be caught in an endless loop.  */
907
          XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
908
 
909
          /* Targets use only REG_NOTE_KIND of the link.  */
910
          PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
911
 
912
          cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
913
                                            insn, cost);
914
 
915
          free_INSN_LIST_node (dep_cost_rtx_link);
916
        }
917
 
918
      if (cost < 0)
919
        cost = 0;
920
    }
921
 
922
  return cost;
923
}
924
 
925
/* Compute cost of dependence LINK.
926
   This is the number of cycles between instruction issue and
927
   instruction results.  */
928
int
929
dep_cost (dep_t link)
930
{
931
  return dep_cost_1 (link, 0);
932
}
933
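/* Worked example (editorial addition, not part of the original source):
   suppose insn PRO has insn_default_latency (PRO) == 3 and insn CON
   uses its result.  For the true dependence PRO->CON, dep_cost returns
   3, so CON can become ready three cycles after PRO issues.  For an
   anti-dependence the cost collapses to 0; for an output dependence
   between two such 3-cycle insns the raw difference
   insn_default_latency (PRO) - insn_default_latency (CON) == 0 is
   bumped up to the minimum of 1.  A target's adjust_cost /
   adjust_cost_2 hook may still rewrite the result afterwards.  */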
 
934
/* Use this sel-sched.c friendly function in reorder2 instead of increasing
935
   INSN_PRIORITY explicitly.  */
936
void
937
increase_insn_priority (rtx insn, int amount)
938
{
939
  if (!sel_sched_p ())
940
    {
941
      /* We're dealing with haifa-sched.c INSN_PRIORITY.  */
942
      if (INSN_PRIORITY_KNOWN (insn))
943
          INSN_PRIORITY (insn) += amount;
944
    }
945
  else
946
    {
947
      /* In sel-sched.c INSN_PRIORITY is not kept up to date.
948
         Use EXPR_PRIORITY instead. */
949
      sel_add_to_insn_priority (insn, amount);
950
    }
951
}
952
 
953
/* Return 'true' if DEP should be included in priority calculations.  */
954
static bool
955
contributes_to_priority_p (dep_t dep)
956
{
957
  if (DEBUG_INSN_P (DEP_CON (dep))
958
      || DEBUG_INSN_P (DEP_PRO (dep)))
959
    return false;
960
 
961
  /* Critical path is meaningful in block boundaries only.  */
962
  if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
963
                                                    DEP_PRO (dep)))
964
    return false;
965
 
966
  /* If flag COUNT_SPEC_IN_CRITICAL_PATH is set,
967
     then speculative instructions will be less likely to be
968
     scheduled.  That is because the priority of
969
     their producers will increase, and, thus, the
970
     producers will more likely be scheduled, thus,
971
     resolving the dependence.  */
972
  if (sched_deps_info->generate_spec_deps
973
      && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
974
      && (DEP_STATUS (dep) & SPECULATIVE))
975
    return false;
976
 
977
  return true;
978
}
979
 
980
/* Compute the number of nondebug forward deps of an insn.  */
981
 
982
static int
983
dep_list_size (rtx insn)
984
{
985
  sd_iterator_def sd_it;
986
  dep_t dep;
987
  int dbgcount = 0, nodbgcount = 0;
988
 
989
  if (!MAY_HAVE_DEBUG_INSNS)
990
    return sd_lists_size (insn, SD_LIST_FORW);
991
 
992
  FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
993
    {
994
      if (DEBUG_INSN_P (DEP_CON (dep)))
995
        dbgcount++;
996
      else if (!DEBUG_INSN_P (DEP_PRO (dep)))
997
        nodbgcount++;
998
    }
999
 
1000
  gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
1001
 
1002
  return nodbgcount;
1003
}
1004
 
1005
/* Compute the priority number for INSN.  */
1006
static int
1007
priority (rtx insn)
1008
{
1009
  if (! INSN_P (insn))
1010
    return 0;
1011
 
1012
  /* We should not be interested in priority of an already scheduled insn.  */
1013
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
1014
 
1015
  if (!INSN_PRIORITY_KNOWN (insn))
1016
    {
1017
      int this_priority = -1;
1018
 
1019
      if (dep_list_size (insn) == 0)
1020
        /* ??? We should set INSN_PRIORITY to insn_cost when an insn has
1021
           some forward deps but all of them are ignored by
1022
           contributes_to_priority hook.  At the moment we set priority of
1023
           such insn to 0.  */
1024
        this_priority = insn_cost (insn);
1025
      else
1026
        {
1027
          rtx prev_first, twin;
1028
          basic_block rec;
1029
 
1030
          /* For recovery check instructions we calculate priority slightly
1031
             different than that of normal instructions.  Instead of walking
1032
             through INSN_FORW_DEPS (check) list, we walk through
1033
             INSN_FORW_DEPS list of each instruction in the corresponding
1034
             recovery block.  */
1035
 
1036
          /* Selective scheduling does not define RECOVERY_BLOCK macro.  */
1037
          rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
1038
          if (!rec || rec == EXIT_BLOCK_PTR)
1039
            {
1040
              prev_first = PREV_INSN (insn);
1041
              twin = insn;
1042
            }
1043
          else
1044
            {
1045
              prev_first = NEXT_INSN (BB_HEAD (rec));
1046
              twin = PREV_INSN (BB_END (rec));
1047
            }
1048
 
1049
          do
1050
            {
1051
              sd_iterator_def sd_it;
1052
              dep_t dep;
1053
 
1054
              FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
1055
                {
1056
                  rtx next;
1057
                  int next_priority;
1058
 
1059
                  next = DEP_CON (dep);
1060
 
1061
                  if (BLOCK_FOR_INSN (next) != rec)
1062
                    {
1063
                      int cost;
1064
 
1065
                      if (!contributes_to_priority_p (dep))
1066
                        continue;
1067
 
1068
                      if (twin == insn)
1069
                        cost = dep_cost (dep);
1070
                      else
1071
                        {
1072
                          struct _dep _dep1, *dep1 = &_dep1;
1073
 
1074
                          init_dep (dep1, insn, next, REG_DEP_ANTI);
1075
 
1076
                          cost = dep_cost (dep1);
1077
                        }
1078
 
1079
                      next_priority = cost + priority (next);
1080
 
1081
                      if (next_priority > this_priority)
1082
                        this_priority = next_priority;
1083
                    }
1084
                }
1085
 
1086
              twin = PREV_INSN (twin);
1087
            }
1088
          while (twin != prev_first);
1089
        }
1090
 
1091
      if (this_priority < 0)
1092
        {
1093
          gcc_assert (this_priority == -1);
1094
 
1095
          this_priority = insn_cost (insn);
1096
        }
1097
 
1098
      INSN_PRIORITY (insn) = this_priority;
1099
      INSN_PRIORITY_STATUS (insn) = 1;
1100
    }
1101
 
1102
  return INSN_PRIORITY (insn);
1103
}
1104
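/* Worked example (editorial addition, not part of the original source):
   for a dependence chain A -> B -> C in which each true dependence
   costs 2 cycles and insn_cost (C) == 1, priority (C) == 1,
   priority (B) == 2 + 1 == 3 and priority (A) == 2 + 3 == 5.  An insn's
   priority is therefore the length of the longest (critical) path from
   it to the end of the region, which is exactly tie-breaking rule 1 in
   the list at the top of this file.  */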
 
1105
/* Macros and functions for keeping the priority queue sorted, and
1106
   dealing with queuing and dequeuing of instructions.  */
1107
 
1108
#define SCHED_SORT(READY, N_READY)                                   \
1109
do { if ((N_READY) == 2)                                             \
1110
       swap_sort (READY, N_READY);                                   \
1111
     else if ((N_READY) > 2)                                         \
1112
         qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); }  \
1113
while (0)
1114
 
1115
/* Setup info about the current register pressure impact of scheduling
1116
   INSN at the current scheduling point.  */
1117
static void
1118
setup_insn_reg_pressure_info (rtx insn)
1119
{
1120
  int i, change, before, after, hard_regno;
1121
  int excess_cost_change;
1122
  enum machine_mode mode;
1123
  enum reg_class cl;
1124
  struct reg_pressure_data *pressure_info;
1125
  int *max_reg_pressure;
1126
  struct reg_use_data *use;
1127
  static int death[N_REG_CLASSES];
1128
 
1129
  excess_cost_change = 0;
1130
  for (i = 0; i < ira_reg_class_cover_size; i++)
1131
    death[ira_reg_class_cover[i]] = 0;
1132
  for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1133
    if (dying_use_p (use))
1134
      {
1135
        cl = sched_regno_cover_class[use->regno];
1136
        if (use->regno < FIRST_PSEUDO_REGISTER)
1137
          death[cl]++;
1138
        else
1139
          death[cl] += ira_reg_class_nregs[cl][PSEUDO_REGNO_MODE (use->regno)];
1140
      }
1141
  pressure_info = INSN_REG_PRESSURE (insn);
1142
  max_reg_pressure = INSN_MAX_REG_PRESSURE (insn);
1143
  gcc_assert (pressure_info != NULL && max_reg_pressure != NULL);
1144
  for (i = 0; i < ira_reg_class_cover_size; i++)
1145
    {
1146
      cl = ira_reg_class_cover[i];
1147
      gcc_assert (curr_reg_pressure[cl] >= 0);
1148
      change = (int) pressure_info[i].set_increase - death[cl];
1149
      before = MAX (0, max_reg_pressure[i] - ira_available_class_regs[cl]);
1150
      after = MAX (0, max_reg_pressure[i] + change
1151
                   - ira_available_class_regs[cl]);
1152
      hard_regno = ira_class_hard_regs[cl][0];
1153
      gcc_assert (hard_regno >= 0);
1154
      mode = reg_raw_mode[hard_regno];
1155
      excess_cost_change += ((after - before)
1156
                             * (ira_memory_move_cost[mode][cl][0]
1157
                                + ira_memory_move_cost[mode][cl][1]));
1158
    }
1159
  INSN_REG_PRESSURE_EXCESS_COST_CHANGE (insn) = excess_cost_change;
1160
}
1161
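/* Worked example (editorial addition, not part of the original source):
   assume one cover class with ira_available_class_regs == 4, a maximal
   pressure of 5 recorded in INSN_MAX_REG_PRESSURE at this point, and an
   insn that sets one new register of the class while killing none
   (set_increase == 1, death == 0, so change == 1).  Then
   before == MAX (0, 5 - 4) == 1, after == MAX (0, 5 + 1 - 4) == 2, and
   INSN_REG_PRESSURE_EXCESS_COST_CHANGE grows by (2 - 1) times the
   round-trip memory move cost of the class, i.e. the estimated cost of
   the extra spill the insn would cause.  */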
 
1162
/* Returns a positive value if x is preferred; returns a negative value if
1163
   y is preferred.  Should never return 0, since that will make the sort
1164
   unstable.  */
1165
 
1166
static int
1167
rank_for_schedule (const void *x, const void *y)
1168
{
1169
  rtx tmp = *(const rtx *) y;
1170
  rtx tmp2 = *(const rtx *) x;
1171
  rtx last;
1172
  int tmp_class, tmp2_class;
1173
  int val, priority_val, info_val;
1174
 
1175
  if (MAY_HAVE_DEBUG_INSNS)
1176
    {
1177
      /* Schedule debug insns as early as possible.  */
1178
      if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
1179
        return -1;
1180
      else if (DEBUG_INSN_P (tmp2))
1181
        return 1;
1182
    }
1183
 
1184
  /* The insn in a schedule group should be issued first.  */
1185
  if (flag_sched_group_heuristic &&
1186
      SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
1187
    return SCHED_GROUP_P (tmp2) ? 1 : -1;
1188
 
1189
  /* Make sure that priority of TMP and TMP2 are initialized.  */
1190
  gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
1191
 
1192
  if (sched_pressure_p)
1193
    {
1194
      int diff;
1195
 
1196
      /* Prefer insn whose scheduling results in the smallest register
1197
         pressure excess.  */
1198
      if ((diff = (INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp)
1199
                   + (INSN_TICK (tmp) > clock_var
1200
                      ? INSN_TICK (tmp) - clock_var : 0)
1201
                   - INSN_REG_PRESSURE_EXCESS_COST_CHANGE (tmp2)
1202
                   - (INSN_TICK (tmp2) > clock_var
1203
                      ? INSN_TICK (tmp2) - clock_var : 0))) != 0)
1204
        return diff;
1205
    }
1206
 
1207
 
1208
  if (sched_pressure_p
1209
      && (INSN_TICK (tmp2) > clock_var || INSN_TICK (tmp) > clock_var))
1210
    {
1211
      if (INSN_TICK (tmp) <= clock_var)
1212
        return -1;
1213
      else if (INSN_TICK (tmp2) <= clock_var)
1214
        return 1;
1215
      else
1216
        return INSN_TICK (tmp) - INSN_TICK (tmp2);
1217
    }
1218
  /* Prefer insn with higher priority.  */
1219
  priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
1220
 
1221
  if (flag_sched_critical_path_heuristic && priority_val)
1222
    return priority_val;
1223
 
1224
  /* Prefer the speculative insn with greater dependency weakness.  */
1225
  if (flag_sched_spec_insn_heuristic && spec_info)
1226
    {
1227
      ds_t ds1, ds2;
1228
      dw_t dw1, dw2;
1229
      int dw;
1230
 
1231
      ds1 = TODO_SPEC (tmp) & SPECULATIVE;
1232
      if (ds1)
1233
        dw1 = ds_weak (ds1);
1234
      else
1235
        dw1 = NO_DEP_WEAK;
1236
 
1237
      ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
1238
      if (ds2)
1239
        dw2 = ds_weak (ds2);
1240
      else
1241
        dw2 = NO_DEP_WEAK;
1242
 
1243
      dw = dw2 - dw1;
1244
      if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
1245
        return dw;
1246
    }
1247
 
1248
  info_val = (*current_sched_info->rank) (tmp, tmp2);
1249
  if (flag_sched_rank_heuristic && info_val)
1250
    return info_val;
1251
 
1252
  if (flag_sched_last_insn_heuristic)
1253
    {
1254
      last = last_scheduled_insn;
1255
 
1256
      if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
1257
        do
1258
          last = PREV_INSN (last);
1259
        while (!NONDEBUG_INSN_P (last)
1260
               && last != current_sched_info->prev_head);
1261
    }
1262
 
1263
  /* Compare insns based on their relation to the last scheduled
1264
     non-debug insn.  */
1265
  if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
1266
    {
1267
      dep_t dep1;
1268
      dep_t dep2;
1269
 
1270
      /* Classify the instructions into three classes:
1271
         1) Data dependent on last scheduled insn.
1272
         2) Anti/Output dependent on last scheduled insn.
1273
         3) Independent of last scheduled insn, or has latency of one.
1274
         Choose the insn from the highest numbered class if different.  */
1275
      dep1 = sd_find_dep_between (last, tmp, true);
1276
 
1277
      if (dep1 == NULL || dep_cost (dep1) == 1)
1278
        tmp_class = 3;
1279
      else if (/* Data dependence.  */
1280
               DEP_TYPE (dep1) == REG_DEP_TRUE)
1281
        tmp_class = 1;
1282
      else
1283
        tmp_class = 2;
1284
 
1285
      dep2 = sd_find_dep_between (last, tmp2, true);
1286
 
1287
      if (dep2 == NULL || dep_cost (dep2)  == 1)
1288
        tmp2_class = 3;
1289
      else if (/* Data dependence.  */
1290
               DEP_TYPE (dep2) == REG_DEP_TRUE)
1291
        tmp2_class = 1;
1292
      else
1293
        tmp2_class = 2;
1294
 
1295
      if ((val = tmp2_class - tmp_class))
1296
        return val;
1297
    }
1298
 
1299
  /* Prefer the insn which has more later insns that depend on it.
1300
     This gives the scheduler more freedom when scheduling later
1301
     instructions at the expense of added register pressure.  */
1302
 
1303
  val = (dep_list_size (tmp2) - dep_list_size (tmp));
1304
 
1305
  if (flag_sched_dep_count_heuristic && val != 0)
1306
    return val;
1307
 
1308
  /* If insns are equally good, sort by INSN_LUID (original insn order),
1309
     so that we make the sort stable.  This minimizes instruction movement,
1310
     thus minimizing sched's effect on debugging and cross-jumping.  */
1311
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
1312
}
1313
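/* Worked example (editorial addition, not part of the original source):
   rank_for_schedule is a qsort comparator, so after SCHED_SORT the
   "best" insn ends up at ready->vec[ready->first], where
   ready_remove_first and ready_element (ready, 0) find it.  If insn X
   has INSN_PRIORITY == 5 and insn Y has INSN_PRIORITY == 3, and no
   earlier heuristic (debug insns, schedule groups, register pressure)
   decides, then rank_for_schedule (&X, &Y) returns 5 - 3 == 2 > 0:
   X is preferred and is sorted closer to the front of the ready list
   than Y.  */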
 
1314
/* Resort the array A in which only the element at index N may be out of order.  */
1315
 
1316
HAIFA_INLINE static void
1317
swap_sort (rtx *a, int n)
1318
{
1319
  rtx insn = a[n - 1];
1320
  int i = n - 2;
1321
 
1322
  while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
1323
    {
1324
      a[i + 1] = a[i];
1325
      i -= 1;
1326
    }
1327
  a[i + 1] = insn;
1328
}
1329
 
1330
/* Add INSN to the insn queue so that it can be executed at least
1331
   N_CYCLES after the currently executing insn.  Preserve insns
1332
   chain for debugging purposes.  */
1333
 
1334
HAIFA_INLINE static void
1335
queue_insn (rtx insn, int n_cycles)
1336
{
1337
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1338
  rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
1339
 
1340
  gcc_assert (n_cycles <= max_insn_queue_index);
1341
  gcc_assert (!DEBUG_INSN_P (insn));
1342
 
1343
  insn_queue[next_q] = link;
1344
  q_size += 1;
1345
 
1346
  if (sched_verbose >= 2)
1347
    {
1348
      fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
1349
               (*current_sched_info->print_insn) (insn, 0));
1350
 
1351
      fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
1352
    }
1353
 
1354
  QUEUE_INDEX (insn) = next_q;
1355
}
1356
 
1357
/* Remove INSN from queue.  */
1358
static void
1359
queue_remove (rtx insn)
1360
{
1361
  gcc_assert (QUEUE_INDEX (insn) >= 0);
1362
  remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
1363
  q_size--;
1364
  QUEUE_INDEX (insn) = QUEUE_NOWHERE;
1365
}
1366
 
1367
/* Return a pointer to the bottom of the ready list, i.e. the insn
1368
   with the lowest priority.  */
1369
 
1370
rtx *
1371
ready_lastpos (struct ready_list *ready)
1372
{
1373
  gcc_assert (ready->n_ready >= 1);
1374
  return ready->vec + ready->first - ready->n_ready + 1;
1375
}
1376
 
1377
/* Add an element INSN to the ready list so that it ends up with the
1378
   lowest/highest priority depending on FIRST_P.  */
1379
 
1380
HAIFA_INLINE static void
1381
ready_add (struct ready_list *ready, rtx insn, bool first_p)
1382
{
1383
  if (!first_p)
1384
    {
1385
      if (ready->first == ready->n_ready)
1386
        {
1387
          memmove (ready->vec + ready->veclen - ready->n_ready,
1388
                   ready_lastpos (ready),
1389
                   ready->n_ready * sizeof (rtx));
1390
          ready->first = ready->veclen - 1;
1391
        }
1392
      ready->vec[ready->first - ready->n_ready] = insn;
1393
    }
1394
  else
1395
    {
1396
      if (ready->first == ready->veclen - 1)
1397
        {
1398
          if (ready->n_ready)
1399
            /* ready_lastpos() fails when called with (ready->n_ready == 0).  */
1400
            memmove (ready->vec + ready->veclen - ready->n_ready - 1,
1401
                     ready_lastpos (ready),
1402
                     ready->n_ready * sizeof (rtx));
1403
          ready->first = ready->veclen - 2;
1404
        }
1405
      ready->vec[++(ready->first)] = insn;
1406
    }
1407
 
1408
  ready->n_ready++;
1409
  if (DEBUG_INSN_P (insn))
1410
    ready->n_debug++;
1411
 
1412
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
1413
  QUEUE_INDEX (insn) = QUEUE_READY;
1414
}
1415
 
1416
/* Remove the element with the highest priority from the ready list and
1417
   return it.  */
1418
 
1419
HAIFA_INLINE static rtx
1420
ready_remove_first (struct ready_list *ready)
1421
{
1422
  rtx t;
1423
 
1424
  gcc_assert (ready->n_ready);
1425
  t = ready->vec[ready->first--];
1426
  ready->n_ready--;
1427
  if (DEBUG_INSN_P (t))
1428
    ready->n_debug--;
1429
  /* If the queue becomes empty, reset it.  */
1430
  if (ready->n_ready == 0)
1431
    ready->first = ready->veclen - 1;
1432
 
1433
  gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
1434
  QUEUE_INDEX (t) = QUEUE_NOWHERE;
1435
 
1436
  return t;
1437
}
1438
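/* Illustrative layout (editorial addition, not part of the original
   source): with veclen == 8, first == 7 and n_ready == 3, the ready
   insns occupy vec[5], vec[6] and vec[7].  vec[7] (ready->first) is the
   highest-priority element, returned by ready_remove_first and by
   ready_element (ready, 0), while ready_lastpos points at vec[5], the
   lowest-priority element.  */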
 
1439
/* The following code implements multi-pass scheduling for the first
1440
   cycle.  In other words, we will try to choose the ready insn which
1441
   permits starting the maximum number of insns on the same cycle.  */
1442
 
1443
/* Return a pointer to the element INDEX from the ready list.  INDEX for
1444
   insn with the highest priority is 0, and the lowest priority has
1445
   N_READY - 1.  */
1446
 
1447
rtx
1448
ready_element (struct ready_list *ready, int index)
1449
{
1450
  gcc_assert (ready->n_ready && index < ready->n_ready);
1451
 
1452
  return ready->vec[ready->first - index];
1453
}
1454
 
1455
/* Remove the element INDEX from the ready list and return it.  INDEX
1456
   for insn with the highest priority is 0, and the lowest priority
1457
   has N_READY - 1.  */
1458
 
1459
HAIFA_INLINE static rtx
1460
ready_remove (struct ready_list *ready, int index)
1461
{
1462
  rtx t;
1463
  int i;
1464
 
1465
  if (index == 0)
1466
    return ready_remove_first (ready);
1467
  gcc_assert (ready->n_ready && index < ready->n_ready);
1468
  t = ready->vec[ready->first - index];
1469
  ready->n_ready--;
1470
  if (DEBUG_INSN_P (t))
1471
    ready->n_debug--;
1472
  for (i = index; i < ready->n_ready; i++)
1473
    ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
1474
  QUEUE_INDEX (t) = QUEUE_NOWHERE;
1475
  return t;
1476
}
1477
 
1478
/* Remove INSN from the ready list.  */
1479
static void
1480
ready_remove_insn (rtx insn)
1481
{
1482
  int i;
1483
 
1484
  for (i = 0; i < readyp->n_ready; i++)
1485
    if (ready_element (readyp, i) == insn)
1486
      {
1487
        ready_remove (readyp, i);
1488
        return;
1489
      }
1490
  gcc_unreachable ();
1491
}
1492
 
1493
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
1494
   macro.  */
1495
 
1496
void
1497
ready_sort (struct ready_list *ready)
1498
{
1499
  int i;
1500
  rtx *first = ready_lastpos (ready);
1501
 
1502
  if (sched_pressure_p)
1503
    {
1504
      for (i = 0; i < ready->n_ready; i++)
1505
        setup_insn_reg_pressure_info (first[i]);
1506
    }
1507
  SCHED_SORT (first, ready->n_ready);
1508
}
1509
 
1510
/* PREV is an insn that is ready to execute.  Adjust its priority if that
1511
   will help shorten or lengthen register lifetimes as appropriate.  Also
1512
   provide a hook for the target to tweak itself.  */
1513
 
1514
HAIFA_INLINE static void
1515
adjust_priority (rtx prev)
1516
{
1517
  /* ??? There used to be code here to try and estimate how an insn
1518
     affected register lifetimes, but it did it by looking at REG_DEAD
1519
     notes, which we removed in schedule_region.  Nor did it try to
1520
     take into account register pressure or anything useful like that.
1521
 
1522
     Revisit when we have a machine model to work with and not before.  */
1523
 
1524
  if (targetm.sched.adjust_priority)
1525
    INSN_PRIORITY (prev) =
1526
      targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
1527
}
1528
 
1529
/* Advance DFA state STATE by one cycle.  */
1530
void
1531
advance_state (state_t state)
1532
{
1533
  if (targetm.sched.dfa_pre_advance_cycle)
1534
    targetm.sched.dfa_pre_advance_cycle ();
1535
 
1536
  if (targetm.sched.dfa_pre_cycle_insn)
1537
    state_transition (state,
1538
                      targetm.sched.dfa_pre_cycle_insn ());
1539
 
1540
  state_transition (state, NULL);
1541
 
1542
  if (targetm.sched.dfa_post_cycle_insn)
1543
    state_transition (state,
1544
                      targetm.sched.dfa_post_cycle_insn ());
1545
 
1546
  if (targetm.sched.dfa_post_advance_cycle)
1547
    targetm.sched.dfa_post_advance_cycle ();
1548
}
1549
 
1550
/* Advance time by one cycle.  */
1551
HAIFA_INLINE static void
1552
advance_one_cycle (void)
1553
{
1554
  advance_state (curr_state);
1555
  if (sched_verbose >= 6)
1556
    fprintf (sched_dump, ";;\tAdvanced a state.\n");
1557
}
1558
 
1559
/* Clock at which the previous instruction was issued.  */
1560
static int last_clock_var;
1561
 
1562
/* Update register pressure after scheduling INSN.  */
1563
static void
1564
update_register_pressure (rtx insn)
1565
{
1566
  struct reg_use_data *use;
1567
  struct reg_set_data *set;
1568
 
1569
  for (use = INSN_REG_USE_LIST (insn); use != NULL; use = use->next_insn_use)
1570
    if (dying_use_p (use) && bitmap_bit_p (curr_reg_live, use->regno))
1571
      mark_regno_birth_or_death (use->regno, false);
1572
  for (set = INSN_REG_SET_LIST (insn); set != NULL; set = set->next_insn_set)
1573
    mark_regno_birth_or_death (set->regno, true);
1574
}
1575
 
1576
/* Set up or update (if UPDATE_P) max register pressure (see its
1577
   meaning in sched-int.h::_haifa_insn_data) for all current BB insns
1578
   after insn AFTER.  */
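/* When UPDATE_P is set, the walk below can stop at the first insn
   whose recorded maxima already match the running maxima, since the
   values recorded for the following insns were computed from that same
   state and therefore cannot differ.  */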
1579
static void
1580
setup_insn_max_reg_pressure (rtx after, bool update_p)
1581
{
1582
  int i, p;
1583
  bool eq_p;
1584
  rtx insn;
1585
  static int max_reg_pressure[N_REG_CLASSES];
1586
 
1587
  save_reg_pressure ();
1588
  for (i = 0; i < ira_reg_class_cover_size; i++)
1589
    max_reg_pressure[ira_reg_class_cover[i]]
1590
      = curr_reg_pressure[ira_reg_class_cover[i]];
1591
  for (insn = NEXT_INSN (after);
1592
       insn != NULL_RTX && ! BARRIER_P (insn)
1593
         && BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (after);
1594
       insn = NEXT_INSN (insn))
1595
    if (NONDEBUG_INSN_P (insn))
1596
      {
1597
        eq_p = true;
1598
        for (i = 0; i < ira_reg_class_cover_size; i++)
1599
          {
1600
            p = max_reg_pressure[ira_reg_class_cover[i]];
1601
            if (INSN_MAX_REG_PRESSURE (insn)[i] != p)
1602
              {
1603
                eq_p = false;
1604
                INSN_MAX_REG_PRESSURE (insn)[i]
1605
                  = max_reg_pressure[ira_reg_class_cover[i]];
1606
              }
1607
          }
1608
        if (update_p && eq_p)
1609
          break;
1610
        update_register_pressure (insn);
1611
        for (i = 0; i < ira_reg_class_cover_size; i++)
1612
          if (max_reg_pressure[ira_reg_class_cover[i]]
1613
              < curr_reg_pressure[ira_reg_class_cover[i]])
1614
            max_reg_pressure[ira_reg_class_cover[i]]
1615
              = curr_reg_pressure[ira_reg_class_cover[i]];
1616
      }
1617
  restore_reg_pressure ();
1618
}
1619
 
1620
/* Update the current register pressure after scheduling INSN.  Update
1621
   also max register pressure for unscheduled insns of the current
1622
   BB.  */
1623
static void
1624
update_reg_and_insn_max_reg_pressure (rtx insn)
1625
{
1626
  int i;
1627
  int before[N_REG_CLASSES];
1628
 
1629
  for (i = 0; i < ira_reg_class_cover_size; i++)
1630
    before[i] = curr_reg_pressure[ira_reg_class_cover[i]];
1631
  update_register_pressure (insn);
1632
  for (i = 0; i < ira_reg_class_cover_size; i++)
1633
    if (curr_reg_pressure[ira_reg_class_cover[i]] != before[i])
1634
      break;
1635
  if (i < ira_reg_class_cover_size)
1636
    setup_insn_max_reg_pressure (insn, true);
1637
}
1638
 
1639
/* Set up register pressure at the beginning of basic block BB whose
1640
   insns start after insn AFTER.  Also set up max register pressure
1641
   for all insns of the basic block.  */
1642
void
1643
sched_setup_bb_reg_pressure_info (basic_block bb, rtx after)
1644
{
1645
  gcc_assert (sched_pressure_p);
1646
  initiate_bb_reg_pressure_info (bb);
1647
  setup_insn_max_reg_pressure (after, false);
1648
}
1649
 
1650
/* INSN is the "currently executing insn".  Launch each insn which was
1651
   waiting on INSN.  READY is the ready list which contains the insns
1652
   that are ready to fire.  CLOCK is the current cycle.  The function
1653
   returns the necessary cycle advance after issuing the insn (it is not
1654
   zero for insns in a schedule group).  */
1655
 
1656
static int
1657
schedule_insn (rtx insn)
1658
{
1659
  sd_iterator_def sd_it;
1660
  dep_t dep;
1661
  int i;
1662
  int advance = 0;
1663
 
1664
  if (sched_verbose >= 1)
1665
    {
1666
      struct reg_pressure_data *pressure_info;
1667
      char buf[2048];
1668
 
1669
      print_insn (buf, insn, 0);
1670
      buf[40] = 0;
1671
      fprintf (sched_dump, ";;\t%3i--> %-40s:", clock_var, buf);
1672
 
1673
      if (recog_memoized (insn) < 0)
1674
        fprintf (sched_dump, "nothing");
1675
      else
1676
        print_reservation (sched_dump, insn);
1677
      pressure_info = INSN_REG_PRESSURE (insn);
1678
      if (pressure_info != NULL)
1679
        {
1680
          fputc (':', sched_dump);
1681
          for (i = 0; i < ira_reg_class_cover_size; i++)
1682
            fprintf (sched_dump, "%s%+d(%d)",
1683
                     reg_class_names[ira_reg_class_cover[i]],
1684
                     pressure_info[i].set_increase, pressure_info[i].change);
1685
        }
1686
      fputc ('\n', sched_dump);
1687
    }
1688
 
1689
  if (sched_pressure_p)
1690
    update_reg_and_insn_max_reg_pressure (insn);
1691
 
1692
  /* The instruction being scheduled should have all its dependencies resolved and
1693
     should have been removed from the ready list.  */
1694
  gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
1695
 
1696
  /* Reset debug insns invalidated by moving this insn.  */
1697
  if (MAY_HAVE_DEBUG_INSNS && !DEBUG_INSN_P (insn))
1698
    for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
1699
         sd_iterator_cond (&sd_it, &dep);)
1700
      {
1701
        rtx dbg = DEP_PRO (dep);
1702
        struct reg_use_data *use, *next;
1703
 
1704
        gcc_assert (DEBUG_INSN_P (dbg));
1705
 
1706
        if (sched_verbose >= 6)
1707
          fprintf (sched_dump, ";;\t\tresetting: debug insn %d\n",
1708
                   INSN_UID (dbg));
1709
 
1710
        /* ??? Rather than resetting the debug insn, we might be able
1711
           to emit a debug temp before the just-scheduled insn, but
1712
           this would involve checking that the expression at the
1713
           point of the debug insn is equivalent to the expression
1714
           before the just-scheduled insn.  They might not be: the
1715
           expression in the debug insn may depend on other insns not
1716
           yet scheduled that set MEMs, REGs or even other debug
1717
           insns.  It's not clear that attempting to preserve debug
1718
           information in these cases is worth the effort, given how
1719
           uncommon these resets are and the likelihood that the debug
1720
           temps introduced won't survive the schedule change.  */
1721
        INSN_VAR_LOCATION_LOC (dbg) = gen_rtx_UNKNOWN_VAR_LOC ();
1722
        df_insn_rescan (dbg);
1723
 
1724
        /* Unknown location doesn't use any registers.  */
1725
        for (use = INSN_REG_USE_LIST (dbg); use != NULL; use = next)
1726
          {
1727
            struct reg_use_data *prev = use;
1728
 
1729
            /* Remove use from the cyclic next_regno_use chain first.  */
1730
            while (prev->next_regno_use != use)
1731
              prev = prev->next_regno_use;
1732
            prev->next_regno_use = use->next_regno_use;
1733
            next = use->next_insn_use;
1734
            free (use);
1735
          }
1736
        INSN_REG_USE_LIST (dbg) = NULL;
1737
 
1738
        /* We delete rather than resolve these deps, otherwise we
1739
           crash in sched_free_deps(), because forward deps are
1740
           expected to be released before backward deps.  */
1741
        sd_delete_dep (sd_it);
1742
      }
1743
 
1744
  gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
1745
  QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
1746
 
1747
  gcc_assert (INSN_TICK (insn) >= MIN_TICK);
1748
  if (INSN_TICK (insn) > clock_var)
1749
    /* INSN has been prematurely moved from the queue to the ready list.
1750
       This is possible only if the following flag is set.  */
1751
    gcc_assert (flag_sched_stalled_insns);
1752
 
1753
  /* ??? Probably, if INSN is scheduled prematurely, we should leave
1754
     INSN_TICK untouched.  This is a machine-dependent issue, actually.  */
1755
  INSN_TICK (insn) = clock_var;
1756
 
1757
  /* Update dependent instructions.  */
1758
  for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
1759
       sd_iterator_cond (&sd_it, &dep);)
1760
    {
1761
      rtx next = DEP_CON (dep);
1762
 
1763
      /* Resolve the dependence between INSN and NEXT.
1764
         sd_resolve_dep () moves current dep to another list thus
1765
         advancing the iterator.  */
1766
      sd_resolve_dep (sd_it);
1767
 
1768
      /* Don't bother trying to mark next as ready if insn is a debug
1769
         insn.  If insn is the last hard dependency, it will have
1770
         already been discounted.  */
1771
      if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
1772
        continue;
1773
 
1774
      if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
1775
        {
1776
          int effective_cost;
1777
 
1778
          effective_cost = try_ready (next);
1779
 
1780
          if (effective_cost >= 0
1781
              && SCHED_GROUP_P (next)
1782
              && advance < effective_cost)
1783
            advance = effective_cost;
1784
        }
1785
      else
1786
        /* A check insn always has only one forward dependence (to the first insn in
1787
           the recovery block), therefore, this will be executed only once.  */
1788
        {
1789
          gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
1790
          fix_recovery_deps (RECOVERY_BLOCK (insn));
1791
        }
1792
    }
1793
 
1794
  /* This is the place where the scheduler basically no longer needs backward and
1795
     forward dependencies for INSN anymore.  Nevertheless they are used in
1796
     heuristics in rank_for_schedule (), early_queue_to_ready () and in
1797
     some targets (e.g. rs6000).  Thus the earliest place where we *can*
1798
     remove dependencies is after targetm.sched.md_finish () call in
1799
     schedule_block ().  But, on the other hand, the safest place to remove
1800
     dependencies is when we are finishing scheduling entire region.  As we
1801
     don't generate [many] dependencies during scheduling itself, we won't
1802
     need memory until beginning of next region.
1803
     Bottom line: Dependencies are removed for all insns at the end of
1804
     scheduling the region.  */
1805
 
1806
  /* Annotate the instruction with issue information -- TImode
1807
     indicates that the instruction is expected not to be able
1808
     to issue on the same cycle as the previous insn.  A machine
1809
     may use this information to decide how the instruction should
1810
     be aligned.  */
1811
  if (issue_rate > 1
1812
      && GET_CODE (PATTERN (insn)) != USE
1813
      && GET_CODE (PATTERN (insn)) != CLOBBER
1814
      && !DEBUG_INSN_P (insn))
1815
    {
1816
      if (reload_completed)
1817
        PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
1818
      last_clock_var = clock_var;
1819
    }
1820
 
1821
  return advance;
1822
}
1823
 
1824
/* Functions for handling of notes.  */
1825
 
1826
/* Add note list that ends on FROM_END to the end of TO_ENDP.  */
1827
void
1828
concat_note_lists (rtx from_end, rtx *to_endp)
1829
{
1830
  rtx from_start;
1831
 
1832
  /* It's easy when we have nothing to concat.  */
1833
  if (from_end == NULL)
1834
    return;
1835
 
1836
  /* It's also easy when the destination is empty.  */
1837
  if (*to_endp == NULL)
1838
    {
1839
      *to_endp = from_end;
1840
      return;
1841
    }
1842
 
1843
  from_start = from_end;
1844
  while (PREV_INSN (from_start) != NULL)
1845
    from_start = PREV_INSN (from_start);
1846
 
1847
  PREV_INSN (from_start) = *to_endp;
1848
  NEXT_INSN (*to_endp) = from_start;
1849
  *to_endp = from_end;
1850
}
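/* A minimal usage sketch (hypothetical note chains, not part of the
   scheduler itself): given a destination chain ... <-> A <-> B whose
   end is B, and a source chain C <-> D whose end is D, the call below
   splices them into ... <-> A <-> B <-> C <-> D and leaves the end
   pointer at D.  */
#if 0
  rtx to_end = note_B;                    /* B ends the destination list.  */
  concat_note_lists (note_D, &to_end);    /* D ends the source list.  */
  /* Afterwards NEXT_INSN (note_B) == note_C and to_end == note_D.  */
#endif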
1851
 
1852
/* Delete notes between HEAD and TAIL and put them in the chain
1853
   of notes ended by NOTE_LIST.  */
1854
void
1855
remove_notes (rtx head, rtx tail)
1856
{
1857
  rtx next_tail, insn, next;
1858
 
1859
  note_list = 0;
1860
  if (head == tail && !INSN_P (head))
1861
    return;
1862
 
1863
  next_tail = NEXT_INSN (tail);
1864
  for (insn = head; insn != next_tail; insn = next)
1865
    {
1866
      next = NEXT_INSN (insn);
1867
      if (!NOTE_P (insn))
1868
        continue;
1869
 
1870
      switch (NOTE_KIND (insn))
1871
        {
1872
        case NOTE_INSN_BASIC_BLOCK:
1873
          continue;
1874
 
1875
        case NOTE_INSN_EPILOGUE_BEG:
1876
          if (insn != tail)
1877
            {
1878
              remove_insn (insn);
1879
              add_reg_note (next, REG_SAVE_NOTE,
1880
                            GEN_INT (NOTE_INSN_EPILOGUE_BEG));
1881
              break;
1882
            }
1883
          /* FALLTHRU */
1884
 
1885
        default:
1886
          remove_insn (insn);
1887
 
1888
          /* Add the note to list that ends at NOTE_LIST.  */
1889
          PREV_INSN (insn) = note_list;
1890
          NEXT_INSN (insn) = NULL_RTX;
1891
          if (note_list)
1892
            NEXT_INSN (note_list) = insn;
1893
          note_list = insn;
1894
          break;
1895
        }
1896
 
1897
      gcc_assert ((sel_sched_p () || insn != tail) && insn != head);
1898
    }
1899
}
1900
 
1901
 
1902
/* Return the head and tail pointers of ebb starting at BEG and ending
1903
   at END.  */
1904
void
1905
get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
1906
{
1907
  rtx beg_head = BB_HEAD (beg);
1908
  rtx beg_tail = BB_END (beg);
1909
  rtx end_head = BB_HEAD (end);
1910
  rtx end_tail = BB_END (end);
1911
 
1912
  /* Don't include any notes or labels at the beginning of the BEG
1913
     basic block, or notes at the end of the END basic block.  */
1914
 
1915
  if (LABEL_P (beg_head))
1916
    beg_head = NEXT_INSN (beg_head);
1917
 
1918
  while (beg_head != beg_tail)
1919
    if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
1920
      beg_head = NEXT_INSN (beg_head);
1921
    else
1922
      break;
1923
 
1924
  *headp = beg_head;
1925
 
1926
  if (beg == end)
1927
    end_head = beg_head;
1928
  else if (LABEL_P (end_head))
1929
    end_head = NEXT_INSN (end_head);
1930
 
1931
  while (end_head != end_tail)
1932
    if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
1933
      end_tail = PREV_INSN (end_tail);
1934
    else
1935
      break;
1936
 
1937
  *tailp = end_tail;
1938
}
1939
 
1940
/* Return nonzero if there are no real insns in the range [ HEAD, TAIL ].  */
1941
 
1942
int
1943
no_real_insns_p (const_rtx head, const_rtx tail)
1944
{
1945
  while (head != NEXT_INSN (tail))
1946
    {
1947
      if (!NOTE_P (head) && !LABEL_P (head)
1948
          && !BOUNDARY_DEBUG_INSN_P (head))
1949
        return 0;
1950
      head = NEXT_INSN (head);
1951
    }
1952
  return 1;
1953
}
1954
 
1955
/* Restore-other-notes: NOTE_LIST is the end of a chain of notes
1956
   previously found among the insns.  Insert them just before HEAD.  */
1957
rtx
1958
restore_other_notes (rtx head, basic_block head_bb)
1959
{
1960
  if (note_list != 0)
1961
    {
1962
      rtx note_head = note_list;
1963
 
1964
      if (head)
1965
        head_bb = BLOCK_FOR_INSN (head);
1966
      else
1967
        head = NEXT_INSN (bb_note (head_bb));
1968
 
1969
      while (PREV_INSN (note_head))
1970
        {
1971
          set_block_for_insn (note_head, head_bb);
1972
          note_head = PREV_INSN (note_head);
1973
        }
1974
      /* In the above loop we've missed this note.  */
1975
      set_block_for_insn (note_head, head_bb);
1976
 
1977
      PREV_INSN (note_head) = PREV_INSN (head);
1978
      NEXT_INSN (PREV_INSN (head)) = note_head;
1979
      PREV_INSN (head) = note_list;
1980
      NEXT_INSN (note_list) = head;
1981
 
1982
      if (BLOCK_FOR_INSN (head) != head_bb)
1983
        BB_END (head_bb) = note_list;
1984
 
1985
      head = note_head;
1986
    }
1987
 
1988
  return head;
1989
}
1990
 
1991
/* Move insns that became ready to fire from queue to ready list.  */
1992
 
1993
static void
1994
queue_to_ready (struct ready_list *ready)
1995
{
1996
  rtx insn;
1997
  rtx link;
1998
  rtx skip_insn;
1999
 
2000
  q_ptr = NEXT_Q (q_ptr);
2001
 
2002
  if (dbg_cnt (sched_insn) == false)
2003
    {
2004
      /* If the debug counter is activated, do not requeue the insn right after
2005
         last_scheduled_insn.  */
2006
      skip_insn = next_nonnote_insn (last_scheduled_insn);
2007
      while (skip_insn && DEBUG_INSN_P (skip_insn))
2008
        skip_insn = next_nonnote_insn (skip_insn);
2009
    }
2010
  else
2011
    skip_insn = NULL_RTX;
2012
 
2013
  /* Add all pending insns that can be scheduled without stalls to the
2014
     ready list.  */
2015
  for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
2016
    {
2017
      insn = XEXP (link, 0);
2018
      q_size -= 1;
2019
 
2020
      if (sched_verbose >= 2)
2021
        fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2022
                 (*current_sched_info->print_insn) (insn, 0));
2023
 
2024
      /* If the ready list is full, delay the insn for 1 cycle.
2025
         See the comment in schedule_block for the rationale.  */
2026
      if (!reload_completed
2027
          && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
2028
          && !SCHED_GROUP_P (insn)
2029
          && insn != skip_insn)
2030
        {
2031
          if (sched_verbose >= 2)
2032
            fprintf (sched_dump, "requeued because ready full\n");
2033
          queue_insn (insn, 1);
2034
        }
2035
      else
2036
        {
2037
          ready_add (ready, insn, false);
2038
          if (sched_verbose >= 2)
2039
            fprintf (sched_dump, "moving to ready without stalls\n");
2040
        }
2041
    }
2042
  free_INSN_LIST_list (&insn_queue[q_ptr]);
2043
 
2044
  /* If there are no ready insns, stall until one is ready and add all
2045
     of the pending insns at that point to the ready list.  */
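  /* For example (numbers invented for illustration): if the nearest
     non-empty queue slot is two cycles ahead of q_ptr, the loop below
     calls advance_one_cycle () twice, moves that slot's insns to READY,
     and clock_var is then advanced by stalls == 2.  */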
2046
  if (ready->n_ready == 0)
2047
    {
2048
      int stalls;
2049
 
2050
      for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
2051
        {
2052
          if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2053
            {
2054
              for (; link; link = XEXP (link, 1))
2055
                {
2056
                  insn = XEXP (link, 0);
2057
                  q_size -= 1;
2058
 
2059
                  if (sched_verbose >= 2)
2060
                    fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
2061
                             (*current_sched_info->print_insn) (insn, 0));
2062
 
2063
                  ready_add (ready, insn, false);
2064
                  if (sched_verbose >= 2)
2065
                    fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
2066
                }
2067
              free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
2068
 
2069
              advance_one_cycle ();
2070
 
2071
              break;
2072
            }
2073
 
2074
          advance_one_cycle ();
2075
        }
2076
 
2077
      q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
2078
      clock_var += stalls;
2079
    }
2080
}
2081
 
2082
/* Used by early_queue_to_ready.  Determines whether it is "ok" to
2083
   prematurely move INSN from the queue to the ready list.  Currently,
2084
   if a target defines the hook 'is_costly_dependence', this function
2085
   uses the hook to check whether there exist any dependences which are
2086
   considered costly by the target, between INSN and other insns that
2087
   have already been scheduled.  Dependences are checked up to Y cycles
2088
   back, with default Y=1; the flag -fsched-stalled-insns-dep=Y allows
2089
   controlling this value.
2090
   (Other considerations could be taken into account instead (or in
2091
   addition) depending on user flags and target hooks.)  */
2092
 
2093
static bool
2094
ok_for_early_queue_removal (rtx insn)
2095
{
2096
  int n_cycles;
2097
  rtx prev_insn = last_scheduled_insn;
2098
 
2099
  if (targetm.sched.is_costly_dependence)
2100
    {
2101
      for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
2102
        {
2103
          for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
2104
            {
2105
              int cost;
2106
 
2107
              if (prev_insn == current_sched_info->prev_head)
2108
                {
2109
                  prev_insn = NULL;
2110
                  break;
2111
                }
2112
 
2113
              if (!NOTE_P (prev_insn))
2114
                {
2115
                  dep_t dep;
2116
 
2117
                  dep = sd_find_dep_between (prev_insn, insn, true);
2118
 
2119
                  if (dep != NULL)
2120
                    {
2121
                      cost = dep_cost (dep);
2122
 
2123
                      if (targetm.sched.is_costly_dependence (dep, cost,
2124
                                flag_sched_stalled_insns_dep - n_cycles))
2125
                        return false;
2126
                    }
2127
                }
2128
 
2129
              if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
2130
                break;
2131
            }
2132
 
2133
          if (!prev_insn)
2134
            break;
2135
          prev_insn = PREV_INSN (prev_insn);
2136
        }
2137
    }
2138
 
2139
  return true;
2140
}
2141
 
2142
 
2143
/* Remove insns from the queue, before they become "ready" with respect
2144
   to FU latency considerations.  */
2145
 
2146
static int
2147
early_queue_to_ready (state_t state, struct ready_list *ready)
2148
{
2149
  rtx insn;
2150
  rtx link;
2151
  rtx next_link;
2152
  rtx prev_link;
2153
  bool move_to_ready;
2154
  int cost;
2155
  state_t temp_state = alloca (dfa_state_size);
2156
  int stalls;
2157
  int insns_removed = 0;
2158
 
2159
  /*
2160
     Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
2161
     function:
2162
 
2163
     X == 0: There is no limit on how many queued insns can be removed
2164
             prematurely.  (flag_sched_stalled_insns = -1).
2165
 
2166
     X >= 1: Only X queued insns can be removed prematurely in each
2167
             invocation.  (flag_sched_stalled_insns = X).
2168
 
2169
     Otherwise: Early queue removal is disabled.
2170
         (flag_sched_stalled_insns = 0)
2171
  */
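  /* For instance, -fsched-stalled-insns=2 gives flag_sched_stalled_insns
     == 2, so at most two insns are moved from Q to the ready list per
     call (see the insns_removed check below), while X == 0 maps to -1
     and removes that cap altogether.  */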
2172
 
2173
  if (! flag_sched_stalled_insns)
2174
    return 0;
2175
 
2176
  for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
2177
    {
2178
      if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
2179
        {
2180
          if (sched_verbose > 6)
2181
            fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
2182
 
2183
          prev_link = 0;
2184
          while (link)
2185
            {
2186
              next_link = XEXP (link, 1);
2187
              insn = XEXP (link, 0);
2188
              if (insn && sched_verbose > 6)
2189
                print_rtl_single (sched_dump, insn);
2190
 
2191
              memcpy (temp_state, state, dfa_state_size);
2192
              if (recog_memoized (insn) < 0)
2193
                /* Use a non-negative cost to indicate that it's not ready,
2194
                   to avoid an infinite Q->R->Q->R cycle.  */
2195
                cost = 0;
2196
              else
2197
                cost = state_transition (temp_state, insn);
2198
 
2199
              if (sched_verbose >= 6)
2200
                fprintf (sched_dump, "transition cost = %d\n", cost);
2201
 
2202
              move_to_ready = false;
2203
              if (cost < 0)
2204
                {
2205
                  move_to_ready = ok_for_early_queue_removal (insn);
2206
                  if (move_to_ready == true)
2207
                    {
2208
                      /* move from Q to R */
2209
                      q_size -= 1;
2210
                      ready_add (ready, insn, false);
2211
 
2212
                      if (prev_link)
2213
                        XEXP (prev_link, 1) = next_link;
2214
                      else
2215
                        insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
2216
 
2217
                      free_INSN_LIST_node (link);
2218
 
2219
                      if (sched_verbose >= 2)
2220
                        fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
2221
                                 (*current_sched_info->print_insn) (insn, 0));
2222
 
2223
                      insns_removed++;
2224
                      if (insns_removed == flag_sched_stalled_insns)
2225
                        /* Remove no more than flag_sched_stalled_insns insns
2226
                           from Q at a time.  */
2227
                        return insns_removed;
2228
                    }
2229
                }
2230
 
2231
              if (move_to_ready == false)
2232
                prev_link = link;
2233
 
2234
              link = next_link;
2235
            } /* while link */
2236
        } /* if link */
2237
 
2238
    } /* for stalls.. */
2239
 
2240
  return insns_removed;
2241
}
2242
 
2243
 
2244
/* Print the ready list for debugging purposes.  Callable from debugger.  */
2245
 
2246
static void
2247
debug_ready_list (struct ready_list *ready)
2248
{
2249
  rtx *p;
2250
  int i;
2251
 
2252
  if (ready->n_ready == 0)
2253
    {
2254
      fprintf (sched_dump, "\n");
2255
      return;
2256
    }
2257
 
2258
  p = ready_lastpos (ready);
2259
  for (i = 0; i < ready->n_ready; i++)
2260
    {
2261
      fprintf (sched_dump, "  %s:%d",
2262
               (*current_sched_info->print_insn) (p[i], 0),
2263
               INSN_LUID (p[i]));
2264
      if (sched_pressure_p)
2265
        fprintf (sched_dump, "(cost=%d",
2266
                 INSN_REG_PRESSURE_EXCESS_COST_CHANGE (p[i]));
2267
      if (INSN_TICK (p[i]) > clock_var)
2268
        fprintf (sched_dump, ":delay=%d", INSN_TICK (p[i]) - clock_var);
2269
      if (sched_pressure_p)
2270
        fprintf (sched_dump, ")");
2271
    }
2272
  fprintf (sched_dump, "\n");
2273
}
2274
 
2275
/* Search INSN for REG_SAVE_NOTE notes and convert them back into insn
2276
   NOTEs.  This is used for NOTE_INSN_EPILOGUE_BEG, so that sched-ebb
2277
   replaces the epilogue note in the correct basic block.  */
2278
void
2279
reemit_notes (rtx insn)
2280
{
2281
  rtx note, last = insn;
2282
 
2283
  for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2284
    {
2285
      if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
2286
        {
2287
          enum insn_note note_type = (enum insn_note) INTVAL (XEXP (note, 0));
2288
 
2289
          last = emit_note_before (note_type, last);
2290
          remove_note (insn, note);
2291
        }
2292
    }
2293
}
2294
 
2295
/* Move INSN.  Reemit notes if needed.  Update CFG, if needed.  */
2296
static void
2297
move_insn (rtx insn, rtx last, rtx nt)
2298
{
2299
  if (PREV_INSN (insn) != last)
2300
    {
2301
      basic_block bb;
2302
      rtx note;
2303
      int jump_p = 0;
2304
 
2305
      bb = BLOCK_FOR_INSN (insn);
2306
 
2307
      /* BB_HEAD is either LABEL or NOTE.  */
2308
      gcc_assert (BB_HEAD (bb) != insn);
2309
 
2310
      if (BB_END (bb) == insn)
2311
        /* If this is the last instruction in the BB, move the end marker one
2312
           instruction up.  */
2313
        {
2314
          /* Jumps are always placed at the end of basic block.  */
2315
          jump_p = control_flow_insn_p (insn);
2316
 
2317
          gcc_assert (!jump_p
2318
                      || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
2319
                          && IS_SPECULATION_BRANCHY_CHECK_P (insn))
2320
                      || (common_sched_info->sched_pass_id
2321
                          == SCHED_EBB_PASS));
2322
 
2323
          gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
2324
 
2325
          BB_END (bb) = PREV_INSN (insn);
2326
        }
2327
 
2328
      gcc_assert (BB_END (bb) != last);
2329
 
2330
      if (jump_p)
2331
        /* We move the block note along with jump.  */
2332
        {
2333
          gcc_assert (nt);
2334
 
2335
          note = NEXT_INSN (insn);
2336
          while (NOTE_NOT_BB_P (note) && note != nt)
2337
            note = NEXT_INSN (note);
2338
 
2339
          if (note != nt
2340
              && (LABEL_P (note)
2341
                  || BARRIER_P (note)))
2342
            note = NEXT_INSN (note);
2343
 
2344
          gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
2345
        }
2346
      else
2347
        note = insn;
2348
 
2349
      NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
2350
      PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
2351
 
2352
      NEXT_INSN (note) = NEXT_INSN (last);
2353
      PREV_INSN (NEXT_INSN (last)) = note;
2354
 
2355
      NEXT_INSN (last) = insn;
2356
      PREV_INSN (insn) = last;
2357
 
2358
      bb = BLOCK_FOR_INSN (last);
2359
 
2360
      if (jump_p)
2361
        {
2362
          fix_jump_move (insn);
2363
 
2364
          if (BLOCK_FOR_INSN (insn) != bb)
2365
            move_block_after_check (insn);
2366
 
2367
          gcc_assert (BB_END (bb) == last);
2368
        }
2369
 
2370
      df_insn_change_bb (insn, bb);
2371
 
2372
      /* Update BB_END, if needed.  */
2373
      if (BB_END (bb) == last)
2374
        BB_END (bb) = insn;
2375
    }
2376
 
2377
  SCHED_GROUP_P (insn) = 0;
2378
}
2379
 
2380
/* Return true if scheduling INSN will finish current clock cycle.  */
2381
static bool
2382
insn_finishes_cycle_p (rtx insn)
2383
{
2384
  if (SCHED_GROUP_P (insn))
2385
    /* After issuing INSN, rest of the sched_group will be forced to issue
2386
       in order.  Don't make any plans for the rest of cycle.  */
2387
    return true;
2388
 
2389
  /* Finishing the block will, apparently, finish the cycle.  */
2390
  if (current_sched_info->insn_finishes_block_p
2391
      && current_sched_info->insn_finishes_block_p (insn))
2392
    return true;
2393
 
2394
  return false;
2395
}
2396
 
2397
/* The following structure describe an entry of the stack of choices.  */
2398
struct choice_entry
2399
{
2400
  /* Ordinal number of the issued insn in the ready queue.  */
2401
  int index;
2402
  /* The number of remaining insns whose issue we should still try.  */
2403
  int rest;
2404
  /* The number of issued essential insns.  */
2405
  int n;
2406
  /* State after issuing the insn.  */
2407
  state_t state;
2408
};
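/* How max_issue () below uses this stack: entry 0 is the search root
   and holds the DFA state on entry; each candidate insn that issues
   successfully pushes one entry recording its ready-list index and the
   state after issue, and backtracking pops entries while restoring the
   saved state.  The insn issued first in the best solution found is
   therefore choice_stack[1].index.  */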
2409
 
2410
/* The following array is used to implement a stack of choices used in
2411
   function max_issue.  */
2412
static struct choice_entry *choice_stack;
2413
 
2414
/* The following variable value is the number of essential insns issued on
2415
   the current cycle.  An insn is an essential one if it changes the
2416
   processor's state.  */
2417
int cycle_issued_insns;
2418
 
2419
/* This holds the value of the target dfa_lookahead hook.  */
2420
int dfa_lookahead;
2421
 
2422
/* The following variable value is maximal number of tries of issuing
2423
   insns for the first cycle multipass insn scheduling.  We define
2424
   this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE).  We would not
2425
   need this constraint if all real insns (with non-negative codes)
2426
   had reservations because in this case the algorithm complexity is
2427
   O(DFA_LOOKAHEAD**ISSUE_RATE).  Unfortunately, the dfa descriptions
2428
   might be incomplete and such insns might occur.  For such
2429
   descriptions, the complexity of algorithm (without the constraint)
2430
   could reach DFA_LOOKAHEAD ** N, where N is the queue length.  */
2431
static int max_lookahead_tries;
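/* A worked example with invented numbers: for a target where
   dfa_lookahead == 4 and issue_rate == 2, the initialization in
   max_issue () computes 100 * 4 * 4 == 1600 tries, i.e. the constant
   100 times DFA_LOOKAHEAD ** ISSUE_RATE described above.  */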
2432
 
2433
/* The following value is the value of the hook
2434
   `first_cycle_multipass_dfa_lookahead' at the last call of
2435
   `max_issue'.  */
2436
static int cached_first_cycle_multipass_dfa_lookahead = 0;
2437
 
2438
/* The following value is the value of `issue_rate' at the last call of
2439
   `sched_init'.  */
2440
static int cached_issue_rate = 0;
2441
 
2442
/* The following function returns the maximal (or close to maximal) number
2443
   of insns which can be issued on the same cycle, one of which
2444
   is the insn with the best rank (the first insn in READY).  To
2445
   do this, the function tries different samples of ready insns.  READY
2446
   is the current queue `ready'.  Global array READY_TRY reflects what
2447
   insns are already issued in this try.  MAX_POINTS is the sum of points
2448
   of all instructions in READY.  The function stops immediately,
2449
   if it reaches a solution in which all instructions can be issued.
2450
   INDEX will contain the index of the best insn in READY.  The following
2451
   function is used only for first cycle multipass scheduling.
2452
 
2453
   PRIVILEGED_N >= 0
2454
 
2455
   This function expects recognized insns only.  All USEs,
2456
   CLOBBERs, etc must be filtered elsewhere.  */
2457
int
2458
max_issue (struct ready_list *ready, int privileged_n, state_t state,
2459
           int *index)
2460
{
2461
  int n, i, all, n_ready, best, delay, tries_num, max_points;
2462
  int more_issue;
2463
  struct choice_entry *top;
2464
  rtx insn;
2465
 
2466
  n_ready = ready->n_ready;
2467
  gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
2468
              && privileged_n <= n_ready);
2469
 
2470
  /* Init MAX_LOOKAHEAD_TRIES.  */
2471
  if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
2472
    {
2473
      cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
2474
      max_lookahead_tries = 100;
2475
      for (i = 0; i < issue_rate; i++)
2476
        max_lookahead_tries *= dfa_lookahead;
2477
    }
2478
 
2479
  /* Init max_points.  */
2480
  max_points = 0;
2481
  more_issue = issue_rate - cycle_issued_insns;
2482
 
2483
  /* ??? We used to assert here that we never issue more insns than issue_rate.
2484
     However, some targets (e.g. MIPS/SB1) claim lower issue rate than can be
2485
     achieved to get better performance.  Until these targets are fixed to use
2486
     scheduler hooks to manipulate insns priority instead, the assert should
2487
     be disabled.
2488
 
2489
     gcc_assert (more_issue >= 0);  */
2490
 
2491
  for (i = 0; i < n_ready; i++)
2492
    if (!ready_try [i])
2493
      {
2494
        if (more_issue-- > 0)
2495
          max_points += ISSUE_POINTS (ready_element (ready, i));
2496
        else
2497
          break;
2498
      }
2499
 
2500
  /* The number of the issued insns in the best solution.  */
2501
  best = 0;
2502
 
2503
  top = choice_stack;
2504
 
2505
  /* Set initial state of the search.  */
2506
  memcpy (top->state, state, dfa_state_size);
2507
  top->rest = dfa_lookahead;
2508
  top->n = 0;
2509
 
2510
  /* Count the number of the insns to search among.  */
2511
  for (all = i = 0; i < n_ready; i++)
2512
    if (!ready_try [i])
2513
      all++;
2514
 
2515
  /* I is the index of the insn to try next.  */
2516
  i = 0;
2517
  tries_num = 0;
2518
  for (;;)
2519
    {
2520
      if (/* If we've reached a dead end or searched enough of what we have
2521
             been asked...  */
2522
          top->rest == 0
2523
          /* Or have nothing else to try.  */
2524
          || i >= n_ready)
2525
        {
2526
          /* ??? (... || i == n_ready).  */
2527
          gcc_assert (i <= n_ready);
2528
 
2529
          if (top == choice_stack)
2530
            break;
2531
 
2532
          if (best < top - choice_stack)
2533
            {
2534
              if (privileged_n)
2535
                {
2536
                  n = privileged_n;
2537
                  /* Try to find issued privileged insn.  */
2538
                  while (n && !ready_try[--n]);
2539
                }
2540
 
2541
              if (/* If all insns are equally good...  */
2542
                  privileged_n == 0
2543
                  /* Or a privileged insn will be issued.  */
2544
                  || ready_try[n])
2545
                /* Then we have a solution.  */
2546
                {
2547
                  best = top - choice_stack;
2548
                  /* This is the index of the insn issued first in this
2549
                     solution.  */
2550
                  *index = choice_stack [1].index;
2551
                  if (top->n == max_points || best == all)
2552
                    break;
2553
                }
2554
            }
2555
 
2556
          /* Set ready-list index to point to the last insn
2557
             ('i++' below will advance it to the next insn).  */
2558
          i = top->index;
2559
 
2560
          /* Backtrack.  */
2561
          ready_try [i] = 0;
2562
          top--;
2563
          memcpy (state, top->state, dfa_state_size);
2564
        }
2565
      else if (!ready_try [i])
2566
        {
2567
          tries_num++;
2568
          if (tries_num > max_lookahead_tries)
2569
            break;
2570
          insn = ready_element (ready, i);
2571
          delay = state_transition (state, insn);
2572
          if (delay < 0)
2573
            {
2574
              if (state_dead_lock_p (state)
2575
                  || insn_finishes_cycle_p (insn))
2576
                /* We won't issue any more instructions in the next
2577
                   choice_state.  */
2578
                top->rest = 0;
2579
              else
2580
                top->rest--;
2581
 
2582
              n = top->n;
2583
              if (memcmp (top->state, state, dfa_state_size) != 0)
2584
                n += ISSUE_POINTS (insn);
2585
 
2586
              /* Advance to the next choice_entry.  */
2587
              top++;
2588
              /* Initialize it.  */
2589
              top->rest = dfa_lookahead;
2590
              top->index = i;
2591
              top->n = n;
2592
              memcpy (top->state, state, dfa_state_size);
2593
 
2594
              ready_try [i] = 1;
2595
              i = -1;
2596
            }
2597
        }
2598
 
2599
      /* Increase ready-list index.  */
2600
      i++;
2601
    }
2602
 
2603
  /* Restore the original state of the DFA.  */
2604
  memcpy (state, choice_stack->state, dfa_state_size);
2605
 
2606
  return best;
2607
}
2608
 
2609
/* The following function chooses insn from READY and modifies
2610
   READY.  The following function is used only for first
2611
   cycle multipass scheduling.
2612
   Return:
2613
   -1 if cycle should be advanced,
2614
   0 if INSN_PTR is set to point to the desirable insn,
2615
   1 if choose_ready () should be restarted without advancing the cycle.  */
2616
static int
2617
choose_ready (struct ready_list *ready, rtx *insn_ptr)
2618
{
2619
  int lookahead;
2620
 
2621
  if (dbg_cnt (sched_insn) == false)
2622
    {
2623
      rtx insn;
2624
 
2625
      insn = next_nonnote_insn (last_scheduled_insn);
2626
 
2627
      if (QUEUE_INDEX (insn) == QUEUE_READY)
2628
        /* INSN is in the ready_list.  */
2629
        {
2630
          ready_remove_insn (insn);
2631
          *insn_ptr = insn;
2632
          return 0;
2633
        }
2634
 
2635
      /* INSN is in the queue.  Advance cycle to move it to the ready list.  */
2636
      return -1;
2637
    }
2638
 
2639
  lookahead = 0;
2640
 
2641
  if (targetm.sched.first_cycle_multipass_dfa_lookahead)
2642
    lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
2643
  if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
2644
      || DEBUG_INSN_P (ready_element (ready, 0)))
2645
    {
2646
      *insn_ptr = ready_remove_first (ready);
2647
      return 0;
2648
    }
2649
  else
2650
    {
2651
      /* Try to choose the better insn.  */
2652
      int index = 0, i, n;
2653
      rtx insn;
2654
      int try_data = 1, try_control = 1;
2655
      ds_t ts;
2656
 
2657
      insn = ready_element (ready, 0);
2658
      if (INSN_CODE (insn) < 0)
2659
        {
2660
          *insn_ptr = ready_remove_first (ready);
2661
          return 0;
2662
        }
2663
 
2664
      if (spec_info
2665
          && spec_info->flags & (PREFER_NON_DATA_SPEC
2666
                                 | PREFER_NON_CONTROL_SPEC))
2667
        {
2668
          for (i = 0, n = ready->n_ready; i < n; i++)
2669
            {
2670
              rtx x;
2671
              ds_t s;
2672
 
2673
              x = ready_element (ready, i);
2674
              s = TODO_SPEC (x);
2675
 
2676
              if (spec_info->flags & PREFER_NON_DATA_SPEC
2677
                  && !(s & DATA_SPEC))
2678
                {
2679
                  try_data = 0;
2680
                  if (!(spec_info->flags & PREFER_NON_CONTROL_SPEC)
2681
                      || !try_control)
2682
                    break;
2683
                }
2684
 
2685
              if (spec_info->flags & PREFER_NON_CONTROL_SPEC
2686
                  && !(s & CONTROL_SPEC))
2687
                {
2688
                  try_control = 0;
2689
                  if (!(spec_info->flags & PREFER_NON_DATA_SPEC) || !try_data)
2690
                    break;
2691
                }
2692
            }
2693
        }
2694
 
2695
      ts = TODO_SPEC (insn);
2696
      if ((ts & SPECULATIVE)
2697
          && (((!try_data && (ts & DATA_SPEC))
2698
               || (!try_control && (ts & CONTROL_SPEC)))
2699
              || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2700
                  && !targetm.sched
2701
                  .first_cycle_multipass_dfa_lookahead_guard_spec (insn))))
2702
        /* Discard speculative instruction that stands first in the ready
2703
           list.  */
2704
        {
2705
          change_queue_index (insn, 1);
2706
          return 1;
2707
        }
2708
 
2709
      ready_try[0] = 0;
2710
 
2711
      for (i = 1; i < ready->n_ready; i++)
2712
        {
2713
          insn = ready_element (ready, i);
2714
 
2715
          ready_try [i]
2716
            = ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2717
               || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC)));
2718
        }
2719
 
2720
      /* Let the target filter the search space.  */
2721
      for (i = 1; i < ready->n_ready; i++)
2722
        if (!ready_try[i])
2723
          {
2724
            insn = ready_element (ready, i);
2725
 
2726
#ifdef ENABLE_CHECKING
2727
            /* If this insn is recognizable we should have already
2728
               recognized it earlier.
2729
               ??? Not very clear where this is supposed to be done.
2730
               See dep_cost_1.  */
2731
            gcc_assert (INSN_CODE (insn) >= 0
2732
                        || recog_memoized (insn) < 0);
2733
#endif
2734
 
2735
            ready_try [i]
2736
              = (/* INSN_CODE check can be omitted here as it is also done later
2737
                    in max_issue ().  */
2738
                 INSN_CODE (insn) < 0
2739
                 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2740
                     && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2741
                     (insn)));
2742
          }
2743
 
2744
      if (max_issue (ready, 1, curr_state, &index) == 0)
2745
        {
2746
          *insn_ptr = ready_remove_first (ready);
2747
          if (sched_verbose >= 4)
2748
            fprintf (sched_dump, ";;\t\tChosen insn (but can't issue) : %s \n",
2749
                     (*current_sched_info->print_insn) (*insn_ptr, 0));
2750
          return 0;
2751
        }
2752
      else
2753
        {
2754
          if (sched_verbose >= 4)
2755
            fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
2756
                     (*current_sched_info->print_insn)
2757
                     (ready_element (ready, index), 0));
2758
 
2759
          *insn_ptr = ready_remove (ready, index);
2760
          return 0;
2761
        }
2762
    }
2763
}
2764
 
2765
/* Use forward list scheduling to rearrange insns of block pointed to by
2766
   TARGET_BB, possibly bringing insns from subsequent blocks in the same
2767
   region.  */
2768
 
2769
void
2770
schedule_block (basic_block *target_bb)
2771
{
2772
  int i, first_cycle_insn_p;
2773
  int can_issue_more;
2774
  state_t temp_state = NULL;  /* It is used for multipass scheduling.  */
2775
  int sort_p, advance, start_clock_var;
2776
 
2777
  /* Head/tail info for this block.  */
2778
  rtx prev_head = current_sched_info->prev_head;
2779
  rtx next_tail = current_sched_info->next_tail;
2780
  rtx head = NEXT_INSN (prev_head);
2781
  rtx tail = PREV_INSN (next_tail);
2782
 
2783
  /* We used to have code to avoid getting parameters moved from hard
2784
     argument registers into pseudos.
2785
 
2786
     However, it was removed when it proved to be of marginal benefit
2787
     and caused problems because schedule_block and compute_forward_dependences
2788
     had different notions of what the "head" insn was.  */
2789
 
2790
  gcc_assert (head != tail || INSN_P (head));
2791
 
2792
  haifa_recovery_bb_recently_added_p = false;
2793
 
2794
  /* Debug info.  */
2795
  if (sched_verbose)
2796
    dump_new_block_header (0, *target_bb, head, tail);
2797
 
2798
  state_reset (curr_state);
2799
 
2800
  /* Clear the ready list.  */
2801
  ready.first = ready.veclen - 1;
2802
  ready.n_ready = 0;
2803
  ready.n_debug = 0;
2804
 
2805
  /* It is used for first cycle multipass scheduling.  */
2806
  temp_state = alloca (dfa_state_size);
2807
 
2808
  if (targetm.sched.md_init)
2809
    targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);
2810
 
2811
  /* We start inserting insns after PREV_HEAD.  */
2812
  last_scheduled_insn = prev_head;
2813
 
2814
  gcc_assert ((NOTE_P (last_scheduled_insn)
2815
               || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
2816
              && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
2817
 
2818
  /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
2819
     queue.  */
2820
  q_ptr = 0;
2821
  q_size = 0;
2822
 
2823
  insn_queue = XALLOCAVEC (rtx, max_insn_queue_index + 1);
2824
  memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
2825
 
2826
  /* Start just before the beginning of time.  */
2827
  clock_var = -1;
2828
 
2829
  /* We need the queue and ready lists and clock_var to be initialized
2830
     in try_ready () (which is called through init_ready_list ()).  */
2831
  (*current_sched_info->init_ready_list) ();
2832
 
2833
  /* The algorithm is O(n^2) in the number of ready insns at any given
2834
     time in the worst case.  Before reload we are more likely to have
2835
     big lists so truncate them to a reasonable size.  */
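  /* For example (limit value invented): if MAX_SCHED_READY_INSNS were
     100 and 500 non-debug insns were ready here, everything from the
     first free-standing insn past that limit onwards would be requeued
     below for one cycle; debug insns sort first and do not count
     against the limit.  */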
2836
  if (!reload_completed
2837
      && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
2838
    {
2839
      ready_sort (&ready);
2840
 
2841
      /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
2842
         If there are debug insns, we know they're first.  */
2843
      for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
2844
        if (!SCHED_GROUP_P (ready_element (&ready, i)))
2845
          break;
2846
 
2847
      if (sched_verbose >= 2)
2848
        {
2849
          fprintf (sched_dump,
2850
                   ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
2851
          fprintf (sched_dump,
2852
                   ";;\t\t before reload => truncated to %d insns\n", i);
2853
        }
2854
 
2855
      /* Delay all insns past it for 1 cycle.  If debug counter is
2856
         activated make an exception for the insn right after
2857
         last_scheduled_insn.  */
2858
      {
2859
        rtx skip_insn;
2860
 
2861
        if (dbg_cnt (sched_insn) == false)
2862
          skip_insn = next_nonnote_insn (last_scheduled_insn);
2863
        else
2864
          skip_insn = NULL_RTX;
2865
 
2866
        while (i < ready.n_ready)
2867
          {
2868
            rtx insn;
2869
 
2870
            insn = ready_remove (&ready, i);
2871
 
2872
            if (insn != skip_insn)
2873
              queue_insn (insn, 1);
2874
          }
2875
      }
2876
    }
2877
 
2878
  /* Now we can restore basic block notes and maintain precise cfg.  */
2879
  restore_bb_notes (*target_bb);
2880
 
2881
  last_clock_var = -1;
2882
 
2883
  advance = 0;
2884
 
2885
  sort_p = TRUE;
2886
  /* Loop until all the insns in BB are scheduled.  */
2887
  while ((*current_sched_info->schedule_more_p) ())
2888
    {
2889
      do
2890
        {
2891
          start_clock_var = clock_var;
2892
 
2893
          clock_var++;
2894
 
2895
          advance_one_cycle ();
2896
 
2897
          /* Add to the ready list all pending insns that can be issued now.
2898
             If there are no ready insns, increment clock until one
2899
             is ready and add all pending insns at that point to the ready
2900
             list.  */
2901
          queue_to_ready (&ready);
2902
 
2903
          gcc_assert (ready.n_ready);
2904
 
2905
          if (sched_verbose >= 2)
2906
            {
2907
              fprintf (sched_dump, ";;\t\tReady list after queue_to_ready:  ");
2908
              debug_ready_list (&ready);
2909
            }
2910
          advance -= clock_var - start_clock_var;
2911
        }
2912
      while (advance > 0);
2913
 
2914
      if (sort_p)
2915
        {
2916
          /* Sort the ready list based on priority.  */
2917
          ready_sort (&ready);
2918
 
2919
          if (sched_verbose >= 2)
2920
            {
2921
              fprintf (sched_dump, ";;\t\tReady list after ready_sort:  ");
2922
              debug_ready_list (&ready);
2923
            }
2924
        }
2925
 
2926
      /* We don't want md sched reorder to even see debug insns, so put
2927
         them out right away.  */
2928
      if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2929
        {
2930
          if (control_flow_insn_p (last_scheduled_insn))
2931
            {
2932
              *target_bb = current_sched_info->advance_target_bb
2933
                (*target_bb, 0);
2934
 
2935
              if (sched_verbose)
2936
                {
2937
                  rtx x;
2938
 
2939
                  x = next_real_insn (last_scheduled_insn);
2940
                  gcc_assert (x);
2941
                  dump_new_block_header (1, *target_bb, x, tail);
2942
                }
2943
 
2944
              last_scheduled_insn = bb_note (*target_bb);
2945
            }
2946
 
2947
          while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
2948
            {
2949
              rtx insn = ready_remove_first (&ready);
2950
              gcc_assert (DEBUG_INSN_P (insn));
2951
              (*current_sched_info->begin_schedule_ready) (insn,
2952
                                                           last_scheduled_insn);
2953
              move_insn (insn, last_scheduled_insn,
2954
                         current_sched_info->next_tail);
2955
              last_scheduled_insn = insn;
2956
              advance = schedule_insn (insn);
2957
              gcc_assert (advance == 0);
2958
              if (ready.n_ready > 0)
2959
                ready_sort (&ready);
2960
            }
2961
 
2962
          if (!ready.n_ready)
2963
            continue;
2964
        }
2965
 
2966
      /* Allow the target to reorder the list, typically for
2967
         better instruction bundling.  */
2968
      if (sort_p && targetm.sched.reorder
2969
          && (ready.n_ready == 0
2970
              || !SCHED_GROUP_P (ready_element (&ready, 0))))
2971
        can_issue_more =
2972
          targetm.sched.reorder (sched_dump, sched_verbose,
2973
                                 ready_lastpos (&ready),
2974
                                 &ready.n_ready, clock_var);
2975
      else
2976
        can_issue_more = issue_rate;
2977
 
2978
      first_cycle_insn_p = 1;
2979
      cycle_issued_insns = 0;
2980
      for (;;)
2981
        {
2982
          rtx insn;
2983
          int cost;
2984
          bool asm_p = false;
2985
 
2986
          if (sched_verbose >= 2)
2987
            {
2988
              fprintf (sched_dump, ";;\tReady list (t = %3d):  ",
2989
                       clock_var);
2990
              debug_ready_list (&ready);
2991
              if (sched_pressure_p)
2992
                print_curr_reg_pressure ();
2993
            }
2994
 
2995
          if (ready.n_ready == 0
2996
              && can_issue_more
2997
              && reload_completed)
2998
            {
2999
              /* Allow scheduling insns directly from the queue in case
3000
                 there's nothing better to do (ready list is empty) but
3001
                 there are still vacant dispatch slots in the current cycle.  */
3002
              if (sched_verbose >= 6)
3003
                fprintf (sched_dump,";;\t\tSecond chance\n");
3004
              memcpy (temp_state, curr_state, dfa_state_size);
3005
              if (early_queue_to_ready (temp_state, &ready))
3006
                ready_sort (&ready);
3007
            }
3008
 
3009
          if (ready.n_ready == 0
3010
              || !can_issue_more
3011
              || state_dead_lock_p (curr_state)
3012
              || !(*current_sched_info->schedule_more_p) ())
3013
            break;
3014
 
3015
          /* Select and remove the insn from the ready list.  */
3016
          if (sort_p)
3017
            {
3018
              int res;
3019
 
3020
              insn = NULL_RTX;
3021
              res = choose_ready (&ready, &insn);
3022
 
3023
              if (res < 0)
3024
                /* Finish cycle.  */
3025
                break;
3026
              if (res > 0)
3027
                /* Restart choose_ready ().  */
3028
                continue;
3029
 
3030
              gcc_assert (insn != NULL_RTX);
3031
            }
3032
          else
3033
            insn = ready_remove_first (&ready);
3034
 
3035
          if (sched_pressure_p && INSN_TICK (insn) > clock_var)
3036
            {
3037
              ready_add (&ready, insn, true);
3038
              advance = 1;
3039
              break;
3040
            }
3041
 
3042
          if (targetm.sched.dfa_new_cycle
3043
              && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
3044
                                              insn, last_clock_var,
3045
                                              clock_var, &sort_p))
3046
            /* SORT_P is used by the target to override sorting
3047
               of the ready list.  This is needed when the target
3048
               has modified its internal structures expecting that
3049
               the insn will be issued next.  As we need the insn
3050
               to have the highest priority (so it will be returned by
3051
               the ready_remove_first call above), we invoke
3052
               ready_add (&ready, insn, true).
3053
               But, still, there is one issue: INSN can be later
3054
               discarded by scheduler's front end through
3055
               current_sched_info->can_schedule_ready_p, hence, won't
3056
               be issued next.  */
3057
            {
3058
              ready_add (&ready, insn, true);
3059
              break;
3060
            }
3061
 
3062
          sort_p = TRUE;
3063
          memcpy (temp_state, curr_state, dfa_state_size);
3064
          if (recog_memoized (insn) < 0)
3065
            {
3066
              asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
3067
                       || asm_noperands (PATTERN (insn)) >= 0);
3068
              if (!first_cycle_insn_p && asm_p)
3069
                /* This is an asm insn that we tried to issue on a
3070
                   cycle other than the first.  Issue it on the next cycle.  */
3071
                cost = 1;
3072
              else
3073
                /* A USE insn, or something else we don't need to
3074
                   understand.  We can't pass these directly to
3075
                   state_transition because it will trigger a
3076
                   fatal error for unrecognizable insns.  */
3077
                cost = 0;
3078
            }
3079
          else if (sched_pressure_p)
3080
            cost = 0;
3081
          else
3082
            {
3083
              cost = state_transition (temp_state, insn);
3084
              if (cost < 0)
3085
                cost = 0;
3086
              else if (cost == 0)
3087
                cost = 1;
3088
            }
3089
 
3090
          if (cost >= 1)
3091
            {
3092
              queue_insn (insn, cost);
3093
              if (SCHED_GROUP_P (insn))
3094
                {
3095
                  advance = cost;
3096
                  break;
3097
                }
3098
 
3099
              continue;
3100
            }
3101
 
3102
          if (current_sched_info->can_schedule_ready_p
3103
              && ! (*current_sched_info->can_schedule_ready_p) (insn))
3104
            /* We normally get here only if we don't want to move
3105
               insn from the split block.  */
3106
            {
3107
              TODO_SPEC (insn) = (TODO_SPEC (insn) & ~SPECULATIVE) | HARD_DEP;
3108
              continue;
3109
            }
3110
 
3111
          /* DECISION is made.  */
3112
 
3113
          if (TODO_SPEC (insn) & SPECULATIVE)
3114
            generate_recovery_code (insn);
3115
 
3116
          if (control_flow_insn_p (last_scheduled_insn)
3117
              /* This is used to switch basic blocks by request
3118
                 from scheduler front-end (actually, sched-ebb.c only).
3119
                 This is used to process blocks with single fallthru
3120
                 edge.  If the succeeding block has a jump, that jump will try
3121
                 to move to the end of the current bb, thus corrupting the CFG.  */
3122
              || current_sched_info->advance_target_bb (*target_bb, insn))
3123
            {
3124
              *target_bb = current_sched_info->advance_target_bb
3125
                (*target_bb, 0);
3126
 
3127
              if (sched_verbose)
3128
                {
3129
                  rtx x;
3130
 
3131
                  x = next_real_insn (last_scheduled_insn);
3132
                  gcc_assert (x);
3133
                  dump_new_block_header (1, *target_bb, x, tail);
3134
                }
3135
 
3136
              last_scheduled_insn = bb_note (*target_bb);
3137
            }
3138
 
3139
          /* Update counters, etc in the scheduler's front end.  */
3140
          (*current_sched_info->begin_schedule_ready) (insn,
3141
                                                       last_scheduled_insn);
3142
 
3143
          move_insn (insn, last_scheduled_insn, current_sched_info->next_tail);
3144
          reemit_notes (insn);
3145
          last_scheduled_insn = insn;
3146
 
3147
          if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
3148
            {
3149
              cycle_issued_insns++;
3150
              memcpy (curr_state, temp_state, dfa_state_size);
3151
            }
3152
 
3153
          if (targetm.sched.variable_issue)
3154
            can_issue_more =
3155
              targetm.sched.variable_issue (sched_dump, sched_verbose,
3156
                                            insn, can_issue_more);
3157
          /* A naked CLOBBER or USE generates no instruction, so do
3158
             not count them against the issue rate.  */
3159
          else if (GET_CODE (PATTERN (insn)) != USE
3160
                   && GET_CODE (PATTERN (insn)) != CLOBBER)
3161
            can_issue_more--;
3162
          advance = schedule_insn (insn);
3163
 
3164
          /* After issuing an asm insn we should start a new cycle.  */
3165
          if (advance == 0 && asm_p)
3166
            advance = 1;
3167
          if (advance != 0)
3168
            break;
3169
 
3170
          first_cycle_insn_p = 0;
3171
 
3172
          /* Sort the ready list based on priority.  This must be
3173
             redone here, as schedule_insn may have readied additional
3174
             insns that will not be sorted correctly.  */
3175
          if (ready.n_ready > 0)
3176
            ready_sort (&ready);
3177
 
3178
          /* Quickly go through debug insns such that md sched
3179
             reorder2 doesn't have to deal with debug insns.  */
3180
          if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
3181
              && (*current_sched_info->schedule_more_p) ())
3182
            {
3183
              if (control_flow_insn_p (last_scheduled_insn))
3184
                {
3185
                  *target_bb = current_sched_info->advance_target_bb
3186
                    (*target_bb, 0);
3187
 
3188
                  if (sched_verbose)
3189
                    {
3190
                      rtx x;
3191
 
3192
                      x = next_real_insn (last_scheduled_insn);
3193
                      gcc_assert (x);
3194
                      dump_new_block_header (1, *target_bb, x, tail);
3195
                    }
3196
 
3197
                  last_scheduled_insn = bb_note (*target_bb);
3198
                }
3199
 
3200
              while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
3201
                {
3202
                  insn = ready_remove_first (&ready);
3203
                  gcc_assert (DEBUG_INSN_P (insn));
3204
                  (*current_sched_info->begin_schedule_ready)
3205
                    (insn, last_scheduled_insn);
3206
                  move_insn (insn, last_scheduled_insn,
3207
                             current_sched_info->next_tail);
3208
                  advance = schedule_insn (insn);
3209
                  last_scheduled_insn = insn;
3210
                  gcc_assert (advance == 0);
3211
                  if (ready.n_ready > 0)
3212
                    ready_sort (&ready);
3213
                }
3214
            }
3215
 
3216
          if (targetm.sched.reorder2
3217
              && (ready.n_ready == 0
3218
                  || !SCHED_GROUP_P (ready_element (&ready, 0))))
3219
            {
3220
              can_issue_more =
3221
                targetm.sched.reorder2 (sched_dump, sched_verbose,
3222
                                        ready.n_ready
3223
                                        ? ready_lastpos (&ready) : NULL,
3224
                                        &ready.n_ready, clock_var);
3225
            }
3226
        }
3227
    }
3228
 
3229
  /* Debug info.  */
3230
  if (sched_verbose)
3231
    {
3232
      fprintf (sched_dump, ";;\tReady list (final):  ");
3233
      debug_ready_list (&ready);
3234
    }
3235
 
3236
  if (current_sched_info->queue_must_finish_empty)
3237
    /* Sanity check -- queue must be empty now.  Meaningless if region has
3238
       multiple bbs.  */
3239
    gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
3240
  else
3241
    {
3242
      /* We must maintain QUEUE_INDEX between blocks in region.  */
3243
      for (i = ready.n_ready - 1; i >= 0; i--)
3244
        {
3245
          rtx x;
3246
 
3247
          x = ready_element (&ready, i);
3248
          QUEUE_INDEX (x) = QUEUE_NOWHERE;
3249
          TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3250
        }
3251
 
3252
      if (q_size)
3253
        for (i = 0; i <= max_insn_queue_index; i++)
3254
          {
3255
            rtx link;
3256
            for (link = insn_queue[i]; link; link = XEXP (link, 1))
3257
              {
3258
                rtx x;
3259
 
3260
                x = XEXP (link, 0);
3261
                QUEUE_INDEX (x) = QUEUE_NOWHERE;
3262
                TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
3263
              }
3264
            free_INSN_LIST_list (&insn_queue[i]);
3265
          }
3266
    }
3267
 
3268
  if (sched_verbose)
3269
    fprintf (sched_dump, ";;   total time = %d\n", clock_var);
3270
 
3271
  if (!current_sched_info->queue_must_finish_empty
3272
      || haifa_recovery_bb_recently_added_p)
3273
    {
3274
      /* INSN_TICK (minimum clock tick at which the insn becomes
3275
         ready) may not be correct for insns in the subsequent
3276
         blocks of the region.  We should use a correct value of
3277
         `clock_var' or modify INSN_TICK.  It is better to keep
3278
         clock_var value equal to 0 at the start of a basic block.
3279
         Therefore we modify INSN_TICK here.  */
3280
      fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
3281
    }
3282
 
3283
  if (targetm.sched.md_finish)
3284
    {
3285
      targetm.sched.md_finish (sched_dump, sched_verbose);
3286
      /* Target might have added some instructions to the scheduled block
3287
         in its md_finish () hook.  These new insns don't have any data
3288
         initialized and to identify them we extend h_i_d so that they'll
3289
         get zero luids.  */
3290
      sched_init_luids (NULL, NULL, NULL, NULL);
3291
    }
3292
 
3293
  if (sched_verbose)
3294
    fprintf (sched_dump, ";;   new head = %d\n;;   new tail = %d\n\n",
3295
             INSN_UID (head), INSN_UID (tail));
3296
 
3297
  /* Update head/tail boundaries.  */
3298
  head = NEXT_INSN (prev_head);
3299
  tail = last_scheduled_insn;
3300
 
3301
  head = restore_other_notes (head, NULL);
3302
 
3303
  current_sched_info->head = head;
3304
  current_sched_info->tail = tail;
3305
}
3306
 
3307
/* Set_priorities: compute priority of each insn in the block.  */
3308
 
3309
int
3310
set_priorities (rtx head, rtx tail)
3311
{
3312
  rtx insn;
3313
  int n_insn;
3314
  int sched_max_insns_priority =
3315
        current_sched_info->sched_max_insns_priority;
3316
  rtx prev_head;
3317
 
3318
  if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
3319
    gcc_unreachable ();
3320
 
3321
  n_insn = 0;
3322
 
3323
  prev_head = PREV_INSN (head);
3324
  for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
3325
    {
3326
      if (!INSN_P (insn))
3327
        continue;
3328
 
3329
      n_insn++;
3330
      (void) priority (insn);
3331
 
3332
      gcc_assert (INSN_PRIORITY_KNOWN (insn));
3333
 
3334
      sched_max_insns_priority = MAX (sched_max_insns_priority,
3335
                                      INSN_PRIORITY (insn));
3336
    }
3337
 
3338
  current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
3339
 
3340
  return n_insn;
3341
}
3342
 
3343
/* Set dump and sched_verbose for the desired debugging output.  If no
3344
   dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
3345
   For -fsched-verbose=N, N>=10, print everything to stderr.  */
3346
void
3347
setup_sched_dump (void)
3348
{
3349
  sched_verbose = sched_verbose_param;
3350
  if (sched_verbose_param == 0 && dump_file)
3351
    sched_verbose = 1;
3352
  sched_dump = ((sched_verbose_param >= 10 || !dump_file)
3353
                ? stderr : dump_file);
3354
}
3355
 
3356
/* Initialize some global state for the scheduler.  This function works
3357
   with the common data shared between all the schedulers.  It is called
3358
   from the scheduler specific initialization routine.  */
3359
 
3360
void
3361
sched_init (void)
3362
{
3363
  /* Disable speculative loads in their presence if cc0 defined.  */
3364
#ifdef HAVE_cc0
3365
  flag_schedule_speculative_load = 0;
3366
#endif
3367
 
3368
  sched_pressure_p = (flag_sched_pressure && ! reload_completed
3369
                      && common_sched_info->sched_pass_id == SCHED_RGN_PASS);
3370
  if (sched_pressure_p)
3371
    ira_setup_eliminable_regset ();
3372
 
3373
  /* Initialize SPEC_INFO.  */
3374
  if (targetm.sched.set_sched_flags)
3375
    {
3376
      spec_info = &spec_info_var;
3377
      targetm.sched.set_sched_flags (spec_info);
3378
 
3379
      if (spec_info->mask != 0)
3380
        {
3381
          spec_info->data_weakness_cutoff =
3382
            (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
3383
          spec_info->control_weakness_cutoff =
3384
            (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
3385
             * REG_BR_PROB_BASE) / 100;
3386
        }
3387
      else
3388
        /* So we won't read anything accidentally.  */
3389
        spec_info = NULL;
3390
 
3391
    }
3392
  else
3393
    /* So we won't read anything accidentally.  */
3394
    spec_info = 0;
3395
 
3396
  /* Initialize issue_rate.  */
3397
  if (targetm.sched.issue_rate)
3398
    issue_rate = targetm.sched.issue_rate ();
3399
  else
3400
    issue_rate = 1;
3401
 
3402
  if (cached_issue_rate != issue_rate)
3403
    {
3404
      cached_issue_rate = issue_rate;
3405
      /* To invalidate max_lookahead_tries:  */
3406
      cached_first_cycle_multipass_dfa_lookahead = 0;
3407
    }
3408
 
3409
  if (targetm.sched.first_cycle_multipass_dfa_lookahead)
3410
    dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
3411
  else
3412
    dfa_lookahead = 0;
3413
 
3414
  if (targetm.sched.init_dfa_pre_cycle_insn)
3415
    targetm.sched.init_dfa_pre_cycle_insn ();
3416
 
3417
  if (targetm.sched.init_dfa_post_cycle_insn)
3418
    targetm.sched.init_dfa_post_cycle_insn ();
3419
 
3420
  dfa_start ();
3421
  dfa_state_size = state_size ();
3422
 
3423
  init_alias_analysis ();
3424
 
3425
  df_set_flags (DF_LR_RUN_DCE);
3426
  df_note_add_problem ();
3427
 
3428
  /* More problems needed for interloop dep calculation in SMS.  */
3429
  if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
3430
    {
3431
      df_rd_add_problem ();
3432
      df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
3433
    }
3434
 
3435
  df_analyze ();
3436
 
3437
  /* Do not run DCE after reload, as this can kill nops inserted
3438
     by bundling.  */
3439
  if (reload_completed)
3440
    df_clear_flags (DF_LR_RUN_DCE);
3441
 
3442
  regstat_compute_calls_crossed ();
3443
 
3444
  if (targetm.sched.md_init_global)
3445
    targetm.sched.md_init_global (sched_dump, sched_verbose,
3446
                                  get_max_uid () + 1);
3447
 
3448
  if (sched_pressure_p)
3449
    {
3450
      int i, max_regno = max_reg_num ();
3451
 
3452
      ira_set_pseudo_classes (sched_verbose ? sched_dump : NULL);
3453
      sched_regno_cover_class
3454
        = (enum reg_class *) xmalloc (max_regno * sizeof (enum reg_class));
3455
      for (i = 0; i < max_regno; i++)
3456
        sched_regno_cover_class[i]
3457
          = (i < FIRST_PSEUDO_REGISTER
3458
             ? ira_class_translate[REGNO_REG_CLASS (i)]
3459
             : reg_cover_class (i));
3460
      curr_reg_live = BITMAP_ALLOC (NULL);
3461
      saved_reg_live = BITMAP_ALLOC (NULL);
3462
      region_ref_regs = BITMAP_ALLOC (NULL);
3463
    }
3464
 
3465
  curr_state = xmalloc (dfa_state_size);
3466
}
3467
 
3468
static void haifa_init_only_bb (basic_block, basic_block);
3469
 
3470
/* Initialize data structures specific to the Haifa scheduler.  */
3471
void
3472
haifa_sched_init (void)
3473
{
3474
  setup_sched_dump ();
3475
  sched_init ();
3476
 
3477
  if (spec_info != NULL)
3478
    {
3479
      sched_deps_info->use_deps_list = 1;
3480
      sched_deps_info->generate_spec_deps = 1;
3481
    }
3482
 
3483
  /* Initialize luids, dependency caches, target and h_i_d for the
3484
     whole function.  */
3485
  {
3486
    bb_vec_t bbs = VEC_alloc (basic_block, heap, n_basic_blocks);
3487
    basic_block bb;
3488
 
3489
    sched_init_bbs ();
3490
 
3491
    FOR_EACH_BB (bb)
3492
      VEC_quick_push (basic_block, bbs, bb);
3493
    sched_init_luids (bbs, NULL, NULL, NULL);
3494
    sched_deps_init (true);
3495
    sched_extend_target ();
3496
    haifa_init_h_i_d (bbs, NULL, NULL, NULL);
3497
 
3498
    VEC_free (basic_block, heap, bbs);
3499
  }
3500
 
3501
  sched_init_only_bb = haifa_init_only_bb;
3502
  sched_split_block = sched_split_block_1;
3503
  sched_create_empty_bb = sched_create_empty_bb_1;
3504
  haifa_recovery_bb_ever_added_p = false;
3505
 
3506
#ifdef ENABLE_CHECKING
3507
  /* This is used primarily for finding bugs in check_cfg () itself.
3508
     We must call sched_init_bbs () before check_cfg () because check_cfg ()
3509
     assumes that the last insn in the last bb has a non-null successor.  */
3510
  check_cfg (0, 0);
3511
#endif
3512
 
3513
  nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
3514
  before_recovery = 0;
3515
  after_recovery = 0;
3516
}
3517
 
3518
/* Finish work with the data specific to the Haifa scheduler.  */
3519
void
3520
haifa_sched_finish (void)
3521
{
3522
  sched_create_empty_bb = NULL;
3523
  sched_split_block = NULL;
3524
  sched_init_only_bb = NULL;
3525
 
3526
  if (spec_info && spec_info->dump)
3527
    {
3528
      char c = reload_completed ? 'a' : 'b';
3529
 
3530
      fprintf (spec_info->dump,
3531
               ";; %s:\n", current_function_name ());
3532
 
3533
      fprintf (spec_info->dump,
3534
               ";; Procedure %cr-begin-data-spec motions == %d\n",
3535
               c, nr_begin_data);
3536
      fprintf (spec_info->dump,
3537
               ";; Procedure %cr-be-in-data-spec motions == %d\n",
3538
               c, nr_be_in_data);
3539
      fprintf (spec_info->dump,
3540
               ";; Procedure %cr-begin-control-spec motions == %d\n",
3541
               c, nr_begin_control);
3542
      fprintf (spec_info->dump,
3543
               ";; Procedure %cr-be-in-control-spec motions == %d\n",
3544
               c, nr_be_in_control);
3545
    }
3546
 
3547
  /* Finalize h_i_d, dependency caches, and luids for the whole
3548
     function.  Target will be finalized in md_global_finish ().  */
3549
  sched_deps_finish ();
3550
  sched_finish_luids ();
3551
  current_sched_info = NULL;
3552
  sched_finish ();
3553
}
3554
 
3555
/* Free global data used during insn scheduling.  This function works with
3556
   the common data shared between the schedulers.  */
3557
 
3558
void
3559
sched_finish (void)
3560
{
3561
  haifa_finish_h_i_d ();
3562
  if (sched_pressure_p)
3563
    {
3564
      free (sched_regno_cover_class);
3565
      BITMAP_FREE (region_ref_regs);
3566
      BITMAP_FREE (saved_reg_live);
3567
      BITMAP_FREE (curr_reg_live);
3568
    }
3569
  free (curr_state);
3570
 
3571
  if (targetm.sched.md_finish_global)
3572
    targetm.sched.md_finish_global (sched_dump, sched_verbose);
3573
 
3574
  end_alias_analysis ();
3575
 
3576
  regstat_free_calls_crossed ();
3577
 
3578
  dfa_finish ();
3579
 
3580
#ifdef ENABLE_CHECKING
3581
  /* After reload ia64 backend clobbers CFG, so can't check anything.  */
3582
  if (!reload_completed)
3583
    check_cfg (0, 0);
3584
#endif
3585
}
3586
 
3587
/* Fix INSN_TICKs of the instructions in the current block as well as
3588
   INSN_TICKs of their dependents.
3589
   HEAD and TAIL are the begin and the end of the current scheduled block.  */
3590
static void
3591
fix_inter_tick (rtx head, rtx tail)
3592
{
3593
  /* Set of instructions with corrected INSN_TICK.  */
3594
  bitmap_head processed;
3595
  /* ??? It is doubtful if we should assume that cycle advance happens on
3596
     basic block boundaries.  Basically insns that are unconditionally ready
3597
     at the start of the block are preferable to those which have
3598
     a one-cycle dependency on an insn from the previous block.  */
3599
  int next_clock = clock_var + 1;
3600
 
3601
  bitmap_initialize (&processed, 0);
3602
 
3603
  /* Iterate over scheduled instructions, fixing their INSN_TICKs and the
3604
     INSN_TICKs of dependent instructions, so that INSN_TICKs are consistent
3605
     across different blocks.  */
3606
  for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
3607
    {
3608
      if (INSN_P (head))
3609
        {
3610
          int tick;
3611
          sd_iterator_def sd_it;
3612
          dep_t dep;
3613
 
3614
          tick = INSN_TICK (head);
3615
          gcc_assert (tick >= MIN_TICK);
3616
 
3617
          /* Fix INSN_TICK of instruction from just scheduled block.  */
3618
          if (!bitmap_bit_p (&processed, INSN_LUID (head)))
3619
            {
3620
              bitmap_set_bit (&processed, INSN_LUID (head));
3621
              tick -= next_clock;
3622
 
3623
              if (tick < MIN_TICK)
3624
                tick = MIN_TICK;
3625
 
3626
              INSN_TICK (head) = tick;
3627
            }
3628
 
3629
          FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
3630
            {
3631
              rtx next;
3632
 
3633
              next = DEP_CON (dep);
3634
              tick = INSN_TICK (next);
3635
 
3636
              if (tick != INVALID_TICK
3637
                  /* If NEXT has its INSN_TICK calculated, fix it.
3638
                     If not - it will be properly calculated from
3639
                     scratch later in fix_tick_ready.  */
3640
                  && !bitmap_bit_p (&processed, INSN_LUID (next)))
3641
                {
3642
                  bitmap_set_bit (&processed, INSN_LUID (next));
3643
                  tick -= next_clock;
3644
 
3645
                  if (tick < MIN_TICK)
3646
                    tick = MIN_TICK;
3647
 
3648
                  if (tick > INTER_TICK (next))
3649
                    INTER_TICK (next) = tick;
3650
                  else
3651
                    tick = INTER_TICK (next);
3652
 
3653
                  INSN_TICK (next) = tick;
3654
                }
3655
            }
3656
        }
3657
    }
3658
  bitmap_clear (&processed);
3659
}
3660
 
3661
static int haifa_speculate_insn (rtx, ds_t, rtx *);
3662
 
3663
/* Check if NEXT is ready to be added to the ready or queue list.
3664
   If "yes", add it to the proper list.
3665
   Returns:
3666
      -1 - is not ready yet,
       0 - added to the ready list,
   0 < N - queued for N cycles.  */
3669
int
3670
try_ready (rtx next)
3671
{
3672
  ds_t old_ts, *ts;
3673
 
3674
  ts = &TODO_SPEC (next);
3675
  old_ts = *ts;
3676
 
3677
  gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP))
3678
              && ((old_ts & HARD_DEP)
3679
                  || (old_ts & SPECULATIVE)));
3680
 
3681
  if (sd_lists_empty_p (next, SD_LIST_BACK))
3682
    /* NEXT has all its dependencies resolved.  */
3683
    {
3684
      /* Remove HARD_DEP bit from NEXT's status.  */
3685
      *ts &= ~HARD_DEP;
3686
 
3687
      if (current_sched_info->flags & DO_SPECULATION)
3688
        /* Remove all speculative bits from NEXT's status.  */
3689
        *ts &= ~SPECULATIVE;
3690
    }
3691
  else
3692
    {
3693
      /* One of NEXT's dependencies has been resolved.
3694
         Recalculate NEXT's status.  */
3695
 
3696
      *ts &= ~SPECULATIVE & ~HARD_DEP;
3697
 
3698
      if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
3699
        /* Now we've got NEXT with speculative deps only.
3700
           1. Look at the deps to see what we have to do.
3701
           2. Check if we can do 'todo'.  */
3702
        {
3703
          sd_iterator_def sd_it;
3704
          dep_t dep;
3705
          bool first_p = true;
3706
 
3707
          FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
3708
            {
3709
              ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
3710
 
3711
              if (DEBUG_INSN_P (DEP_PRO (dep))
3712
                  && !DEBUG_INSN_P (next))
3713
                continue;
3714
 
3715
              if (first_p)
3716
                {
3717
                  first_p = false;
3718
 
3719
                  *ts = ds;
3720
                }
3721
              else
3722
                *ts = ds_merge (*ts, ds);
3723
            }
3724
 
3725
          if (ds_weak (*ts) < spec_info->data_weakness_cutoff)
3726
            /* Too few points.  */
3727
            *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3728
        }
3729
      else
3730
        *ts |= HARD_DEP;
3731
    }
3732
 
3733
  if (*ts & HARD_DEP)
3734
    gcc_assert (*ts == old_ts
3735
                && QUEUE_INDEX (next) == QUEUE_NOWHERE);
3736
  else if (current_sched_info->new_ready)
3737
    *ts = current_sched_info->new_ready (next, *ts);
3738
 
3739
  /* * if !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then insn might
3740
     have either its original pattern or a changed (speculative) one.  This is due
3741
     to changing ebb in region scheduling.
3742
     * But if (old_ts & SPECULATIVE), then we are pretty sure that insn
3743
     has speculative pattern.
3744
 
3745
     We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
3746
     control-speculative NEXT could have been discarded by sched-rgn.c
3747
     (the same case as when discarded by can_schedule_ready_p ()).  */
3748
 
3749
  if ((*ts & SPECULATIVE)
3750
      /* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
3751
         need to change anything.  */
3752
      && *ts != old_ts)
3753
    {
3754
      int res;
3755
      rtx new_pat;
3756
 
3757
      gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
3758
 
3759
      res = haifa_speculate_insn (next, *ts, &new_pat);
3760
 
3761
      switch (res)
3762
        {
3763
        case -1:
3764
          /* It would be nice to change DEP_STATUS of all dependences,
3765
             which have ((DEP_STATUS & SPECULATIVE) == *ts) to HARD_DEP,
3766
             so we won't reanalyze anything.  */
3767
          *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
3768
          break;
3769
 
3770
        case 0:
3771
          /* We follow the rule, that every speculative insn
3772
             has non-null ORIG_PAT.  */
3773
          if (!ORIG_PAT (next))
3774
            ORIG_PAT (next) = PATTERN (next);
3775
          break;
3776
 
3777
        case 1:
3778
          if (!ORIG_PAT (next))
3779
            /* If we are going to overwrite the original pattern of the insn,
3780
               save it.  */
3781
            ORIG_PAT (next) = PATTERN (next);
3782
 
3783
          haifa_change_pattern (next, new_pat);
3784
          break;
3785
 
3786
        default:
3787
          gcc_unreachable ();
3788
        }
3789
    }
3790
 
3791
  /* We need to restore pattern only if (*ts == 0), because otherwise it is
3792
     either correct (*ts & SPECULATIVE),
3793
     or we simply don't care (*ts & HARD_DEP).  */
3794
 
3795
  gcc_assert (!ORIG_PAT (next)
3796
              || !IS_SPECULATION_BRANCHY_CHECK_P (next));
3797
 
3798
  if (*ts & HARD_DEP)
3799
    {
3800
      /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
3801
         control-speculative NEXT could have been discarded by sched-rgn.c
3802
         (the same case as when discarded by can_schedule_ready_p ()).  */
3803
      /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
3804
 
3805
      change_queue_index (next, QUEUE_NOWHERE);
3806
      return -1;
3807
    }
3808
  else if (!(*ts & BEGIN_SPEC) && ORIG_PAT (next) && !IS_SPECULATION_CHECK_P (next))
3809
    /* We should change pattern of every previously speculative
3810
       instruction - and we determine if NEXT was speculative by using
3811
       ORIG_PAT field.  Except one case - speculation checks have ORIG_PAT
3812
       set too, so skip them.  */
3813
    {
3814
      haifa_change_pattern (next, ORIG_PAT (next));
3815
      ORIG_PAT (next) = 0;
3816
    }
3817
 
3818
  if (sched_verbose >= 2)
3819
    {
3820
      int s = TODO_SPEC (next);
3821
 
3822
      fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
3823
               (*current_sched_info->print_insn) (next, 0));
3824
 
3825
      if (spec_info && spec_info->dump)
3826
        {
3827
          if (s & BEGIN_DATA)
3828
            fprintf (spec_info->dump, "; data-spec;");
3829
          if (s & BEGIN_CONTROL)
3830
            fprintf (spec_info->dump, "; control-spec;");
3831
          if (s & BE_IN_CONTROL)
3832
            fprintf (spec_info->dump, "; in-control-spec;");
3833
        }
3834
 
3835
      fprintf (sched_dump, "\n");
3836
    }
3837
 
3838
  adjust_priority (next);
3839
 
3840
  return fix_tick_ready (next);
3841
}
3842
 
3843
/* Calculate INSN_TICK of NEXT and add it to either ready or queue list.  */
3844
static int
3845
fix_tick_ready (rtx next)
3846
{
3847
  int tick, delay;
3848
 
3849
  if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
3850
    {
3851
      int full_p;
3852
      sd_iterator_def sd_it;
3853
      dep_t dep;
3854
 
3855
      tick = INSN_TICK (next);
3856
      /* If tick is not equal to INVALID_TICK, then update
3857
         INSN_TICK of NEXT with the most recent resolved dependence
3858
         cost.  Otherwise, recalculate from scratch.  */
3859
      full_p = (tick == INVALID_TICK);
3860
 
3861
      FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
3862
        {
3863
          rtx pro = DEP_PRO (dep);
3864
          int tick1;
3865
 
3866
          gcc_assert (INSN_TICK (pro) >= MIN_TICK);
3867
 
3868
          tick1 = INSN_TICK (pro) + dep_cost (dep);
3869
          if (tick1 > tick)
3870
            tick = tick1;
3871
 
3872
          if (!full_p)
3873
            break;
3874
        }
3875
    }
3876
  else
3877
    tick = -1;
3878
 
3879
  INSN_TICK (next) = tick;
3880
 
3881
  delay = tick - clock_var;
3882
  if (delay <= 0 || sched_pressure_p)
3883
    delay = QUEUE_READY;
3884
 
3885
  change_queue_index (next, delay);
3886
 
3887
  return delay;
3888
}
3889
 
3890
/* Move NEXT to the proper queue list with (DELAY >= 1),
3891
   or add it to the ready list (DELAY == QUEUE_READY),
3892
   or remove it from ready and queue lists at all (DELAY == QUEUE_NOWHERE).  */
3893
static void
3894
change_queue_index (rtx next, int delay)
3895
{
3896
  int i = QUEUE_INDEX (next);
3897
 
3898
  gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
3899
              && delay != 0);
3900
  gcc_assert (i != QUEUE_SCHEDULED);
3901
 
3902
  if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
3903
      || (delay < 0 && delay == i))
3904
    /* We have nothing to do.  */
3905
    return;
3906
 
3907
  /* Remove NEXT from wherever it is now.  */
3908
  if (i == QUEUE_READY)
3909
    ready_remove_insn (next);
3910
  else if (i >= 0)
3911
    queue_remove (next);
3912
 
3913
  /* Add it to the proper place.  */
3914
  if (delay == QUEUE_READY)
3915
    ready_add (readyp, next, false);
3916
  else if (delay >= 1)
3917
    queue_insn (next, delay);
3918
 
3919
  if (sched_verbose >= 2)
3920
    {
3921
      fprintf (sched_dump, ";;\t\ttick updated: insn %s",
3922
               (*current_sched_info->print_insn) (next, 0));
3923
 
3924
      if (delay == QUEUE_READY)
3925
        fprintf (sched_dump, " into ready\n");
3926
      else if (delay >= 1)
3927
        fprintf (sched_dump, " into queue with cost=%d\n", delay);
3928
      else
3929
        fprintf (sched_dump, " removed from ready or queue lists\n");
3930
    }
3931
}
3932
 
3933
static int sched_ready_n_insns = -1;
3934
 
3935
/* Initialize per region data structures.  */
3936
void
3937
sched_extend_ready_list (int new_sched_ready_n_insns)
3938
{
3939
  int i;
3940
 
3941
  if (sched_ready_n_insns == -1)
3942
    /* At the first call we need to initialize one more choice_stack
3943
       entry.  */
3944
    {
3945
      i = 0;
3946
      sched_ready_n_insns = 0;
3947
    }
3948
  else
3949
    i = sched_ready_n_insns + 1;
3950
 
3951
  ready.veclen = new_sched_ready_n_insns + issue_rate;
3952
  ready.vec = XRESIZEVEC (rtx, ready.vec, ready.veclen);
3953
 
3954
  gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
3955
 
3956
  ready_try = (char *) xrecalloc (ready_try, new_sched_ready_n_insns,
3957
                                  sched_ready_n_insns, sizeof (*ready_try));
3958
 
3959
  /* We allocate +1 element to save initial state in the choice_stack[0]
3960
     entry.  */
3961
  choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
3962
                             new_sched_ready_n_insns + 1);
3963
 
3964
  for (; i <= new_sched_ready_n_insns; i++)
3965
    choice_stack[i].state = xmalloc (dfa_state_size);
3966
 
3967
  sched_ready_n_insns = new_sched_ready_n_insns;
3968
}
3969
 
3970
/* Free per region data structures.  */
3971
void
3972
sched_finish_ready_list (void)
3973
{
3974
  int i;
3975
 
3976
  free (ready.vec);
3977
  ready.vec = NULL;
3978
  ready.veclen = 0;
3979
 
3980
  free (ready_try);
3981
  ready_try = NULL;
3982
 
3983
  for (i = 0; i <= sched_ready_n_insns; i++)
3984
    free (choice_stack [i].state);
3985
  free (choice_stack);
3986
  choice_stack = NULL;
3987
 
3988
  sched_ready_n_insns = -1;
3989
}
3990
 
3991
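/* Return the luid to use for a non-insn X (a note or a label): these do
   not need distinct luids, so always return zero.  */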
static int
3992
haifa_luid_for_non_insn (rtx x)
3993
{
3994
  gcc_assert (NOTE_P (x) || LABEL_P (x));
3995
 
3996
  return 0;
3997
}
3998
 
3999
/* Generates recovery code for INSN.  */
4000
static void
4001
generate_recovery_code (rtx insn)
4002
{
4003
  if (TODO_SPEC (insn) & BEGIN_SPEC)
4004
    begin_speculative_block (insn);
4005
 
4006
  /* Here we have insn with no dependencies to
4007
     instructions other than CHECK_SPEC ones.  */
4008
 
4009
  if (TODO_SPEC (insn) & BE_IN_SPEC)
4010
    add_to_speculative_block (insn);
4011
}
4012
 
4013
/* Helper function.
4014
   Tries to add speculative dependencies of type FS between TWIN and the
4015
   consumers of INSN's forward dependencies.  */
4016
static void
4017
process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
4018
{
4019
  sd_iterator_def sd_it;
4020
  dep_t dep;
4021
 
4022
  FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
4023
    {
4024
      ds_t ds;
4025
      rtx consumer;
4026
 
4027
      consumer = DEP_CON (dep);
4028
 
4029
      ds = DEP_STATUS (dep);
4030
 
4031
      if (/* If we want to create speculative dep.  */
4032
          fs
4033
          /* And we can do that because this is a true dep.  */
4034
          && (ds & DEP_TYPES) == DEP_TRUE)
4035
        {
4036
          gcc_assert (!(ds & BE_IN_SPEC));
4037
 
4038
          if (/* If this dep can be overcome with 'begin speculation'.  */
4039
              ds & BEGIN_SPEC)
4040
            /* Then we have a choice: keep the dep 'begin speculative'
4041
               or transform it into 'be in speculative'.  */
4042
            {
4043
              if (/* In try_ready we assert that if insn once became ready
4044
                     it can be removed from the ready (or queue) list only
4045
                     due to backend decision.  Hence we can't let the
4046
                     probability of the speculative dep decrease.  */
4047
                  ds_weak (ds) <= ds_weak (fs))
4048
                {
4049
                  ds_t new_ds;
4050
 
4051
                  new_ds = (ds & ~BEGIN_SPEC) | fs;
4052
 
4053
                  if (/* consumer can 'be in speculative'.  */
4054
                      sched_insn_is_legitimate_for_speculation_p (consumer,
4055
                                                                  new_ds))
4056
                    /* Transform it to be in speculative.  */
4057
                    ds = new_ds;
4058
                }
4059
            }
4060
          else
4061
            /* Mark the dep as 'be in speculative'.  */
4062
            ds |= fs;
4063
        }
4064
 
4065
      {
4066
        dep_def _new_dep, *new_dep = &_new_dep;
4067
 
4068
        init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
4069
        sd_add_dep (new_dep, false);
4070
      }
4071
    }
4072
}
4073
 
4074
/* Generates recovery code for BEGIN speculative INSN.  */
4075
static void
4076
begin_speculative_block (rtx insn)
4077
{
4078
  if (TODO_SPEC (insn) & BEGIN_DATA)
4079
    nr_begin_data++;
4080
  if (TODO_SPEC (insn) & BEGIN_CONTROL)
4081
    nr_begin_control++;
4082
 
4083
  create_check_block_twin (insn, false);
4084
 
4085
  TODO_SPEC (insn) &= ~BEGIN_SPEC;
4086
}
4087
 
4088
static void haifa_init_insn (rtx);
4089
 
4090
/* Generates recovery code for BE_IN speculative INSN.  */
4091
static void
4092
add_to_speculative_block (rtx insn)
4093
{
4094
  ds_t ts;
4095
  sd_iterator_def sd_it;
4096
  dep_t dep;
4097
  rtx twins = NULL;
4098
  rtx_vec_t priorities_roots;
4099
 
4100
  ts = TODO_SPEC (insn);
4101
  gcc_assert (!(ts & ~BE_IN_SPEC));
4102
 
4103
  if (ts & BE_IN_DATA)
4104
    nr_be_in_data++;
4105
  if (ts & BE_IN_CONTROL)
4106
    nr_be_in_control++;
4107
 
4108
  TODO_SPEC (insn) &= ~BE_IN_SPEC;
4109
  gcc_assert (!TODO_SPEC (insn));
4110
 
4111
  DONE_SPEC (insn) |= ts;
4112
 
4113
  /* First we convert all simple checks to branchy ones.  */
4114
  for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4115
       sd_iterator_cond (&sd_it, &dep);)
4116
    {
4117
      rtx check = DEP_PRO (dep);
4118
 
4119
      if (IS_SPECULATION_SIMPLE_CHECK_P (check))
4120
        {
4121
          create_check_block_twin (check, true);
4122
 
4123
          /* Restart search.  */
4124
          sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4125
        }
4126
      else
4127
        /* Continue search.  */
4128
        sd_iterator_next (&sd_it);
4129
    }
4130
 
4131
  priorities_roots = NULL;
4132
  clear_priorities (insn, &priorities_roots);
4133
 
4134
  while (1)
4135
    {
4136
      rtx check, twin;
4137
      basic_block rec;
4138
 
4139
      /* Get the first backward dependency of INSN.  */
4140
      sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4141
      if (!sd_iterator_cond (&sd_it, &dep))
4142
        /* INSN has no backward dependencies left.  */
4143
        break;
4144
 
4145
      gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
4146
                  && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
4147
                  && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4148
 
4149
      check = DEP_PRO (dep);
4150
 
4151
      gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
4152
                  && QUEUE_INDEX (check) == QUEUE_NOWHERE);
4153
 
4154
      rec = BLOCK_FOR_INSN (check);
4155
 
4156
      twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
4157
      haifa_init_insn (twin);
4158
 
4159
      sd_copy_back_deps (twin, insn, true);
4160
 
4161
      if (sched_verbose && spec_info->dump)
4162
        /* INSN_BB (insn) isn't determined for twin insns yet.
4163
           So we can't use current_sched_info->print_insn.  */
4164
        fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4165
                 INSN_UID (twin), rec->index);
4166
 
4167
      twins = alloc_INSN_LIST (twin, twins);
4168
 
4169
      /* Add dependences between TWIN and all appropriate
4170
         instructions from REC.  */
4171
      FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
4172
        {
4173
          rtx pro = DEP_PRO (dep);
4174
 
4175
          gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
4176
 
4177
          /* INSN might have dependencies on instructions from
4178
             several recovery blocks.  At this iteration we process those
4179
             producers that reside in REC.  */
4180
          if (BLOCK_FOR_INSN (pro) == rec)
4181
            {
4182
              dep_def _new_dep, *new_dep = &_new_dep;
4183
 
4184
              init_dep (new_dep, pro, twin, REG_DEP_TRUE);
4185
              sd_add_dep (new_dep, false);
4186
            }
4187
        }
4188
 
4189
      process_insn_forw_deps_be_in_spec (insn, twin, ts);
4190
 
4191
      /* Remove all dependencies between INSN and insns in REC.  */
4192
      for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4193
           sd_iterator_cond (&sd_it, &dep);)
4194
        {
4195
          rtx pro = DEP_PRO (dep);
4196
 
4197
          if (BLOCK_FOR_INSN (pro) == rec)
4198
            sd_delete_dep (sd_it);
4199
          else
4200
            sd_iterator_next (&sd_it);
4201
        }
4202
    }
4203
 
4204
  /* We couldn't have added the dependencies between INSN and TWINS earlier
4205
     because that would make TWINS appear in the INSN_BACK_DEPS (INSN).  */
4206
  while (twins)
4207
    {
4208
      rtx twin;
4209
 
4210
      twin = XEXP (twins, 0);
4211
 
4212
      {
4213
        dep_def _new_dep, *new_dep = &_new_dep;
4214
 
4215
        init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4216
        sd_add_dep (new_dep, false);
4217
      }
4218
 
4219
      twin = XEXP (twins, 1);
4220
      free_INSN_LIST_node (twins);
4221
      twins = twin;
4222
    }
4223
 
4224
  calc_priorities (priorities_roots);
4225
  VEC_free (rtx, heap, priorities_roots);
4226
}
4227
 
4228
/* Extends the array pointed to by P, zero-filling only the new part.  */
4229
void *
4230
xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
4231
{
4232
  gcc_assert (new_nmemb >= old_nmemb);
4233
  p = XRESIZEVAR (void, p, new_nmemb * size);
4234
  memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
4235
  return p;
4236
}
4237
 
4238
/* Helper function.
4239
   Find fallthru edge from PRED.  */
4240
edge
4241
find_fallthru_edge (basic_block pred)
4242
{
4243
  edge e;
4244
  edge_iterator ei;
4245
  basic_block succ;
4246
 
4247
  succ = pred->next_bb;
4248
  gcc_assert (succ->prev_bb == pred);
4249
 
4250
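  /* Search whichever of the two edge lists is shorter.  */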
  if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
4251
    {
4252
      FOR_EACH_EDGE (e, ei, pred->succs)
4253
        if (e->flags & EDGE_FALLTHRU)
4254
          {
4255
            gcc_assert (e->dest == succ);
4256
            return e;
4257
          }
4258
    }
4259
  else
4260
    {
4261
      FOR_EACH_EDGE (e, ei, succ->preds)
4262
        if (e->flags & EDGE_FALLTHRU)
4263
          {
4264
            gcc_assert (e->src == pred);
4265
            return e;
4266
          }
4267
    }
4268
 
4269
  return NULL;
4270
}
4271
 
4272
/* Extend per basic block data structures.  */
4273
static void
4274
sched_extend_bb (void)
4275
{
4276
  rtx insn;
4277
 
4278
  /* The following is done to keep current_sched_info->next_tail non null.  */
4279
  insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
4280
  if (NEXT_INSN (insn) == 0
4281
      || (!NOTE_P (insn)
4282
          && !LABEL_P (insn)
4283
          /* Don't emit a NOTE if it would end up before a BARRIER.  */
4284
          && !BARRIER_P (NEXT_INSN (insn))))
4285
    {
4286
      rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
4287
      /* Make the note appear outside the BB.  */
4288
      set_block_for_insn (note, NULL);
4289
      BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
4290
    }
4291
}
4292
 
4293
/* Init per basic block data structures.  */
4294
void
4295
sched_init_bbs (void)
4296
{
4297
  sched_extend_bb ();
4298
}
4299
 
4300
/* Initialize BEFORE_RECOVERY variable.  */
4301
static void
4302
init_before_recovery (basic_block *before_recovery_ptr)
4303
{
4304
  basic_block last;
4305
  edge e;
4306
 
4307
  last = EXIT_BLOCK_PTR->prev_bb;
4308
  e = find_fallthru_edge (last);
4309
 
4310
  if (e)
4311
    {
4312
      /* We create two basic blocks:
4313
         1. A single-instruction block inserted right after E->SRC,
4314
            which jumps to
4315
         2. Empty block right before EXIT_BLOCK.
4316
         Between these two blocks recovery blocks will be emitted.  */
4317
 
4318
      basic_block single, empty;
4319
      rtx x, label;
4320
 
4321
      /* If the fallthrough edge to exit we've found is from the block we've
4322
         created before, don't do anything more.  */
4323
      if (last == after_recovery)
4324
        return;
4325
 
4326
      adding_bb_to_current_region_p = false;
4327
 
4328
      single = sched_create_empty_bb (last);
4329
      empty = sched_create_empty_bb (single);
4330
 
4331
      /* Add new blocks to the root loop.  */
4332
      if (current_loops != NULL)
4333
        {
4334
          add_bb_to_loop (single, VEC_index (loop_p, current_loops->larray, 0));
4335
          add_bb_to_loop (empty, VEC_index (loop_p, current_loops->larray, 0));
4336
        }
4337
 
4338
      single->count = last->count;
4339
      empty->count = last->count;
4340
      single->frequency = last->frequency;
4341
      empty->frequency = last->frequency;
4342
      BB_COPY_PARTITION (single, last);
4343
      BB_COPY_PARTITION (empty, last);
4344
 
4345
      redirect_edge_succ (e, single);
4346
      make_single_succ_edge (single, empty, 0);
4347
      make_single_succ_edge (empty, EXIT_BLOCK_PTR,
4348
                             EDGE_FALLTHRU | EDGE_CAN_FALLTHRU);
4349
 
4350
      label = block_label (empty);
4351
      x = emit_jump_insn_after (gen_jump (label), BB_END (single));
4352
      JUMP_LABEL (x) = label;
4353
      LABEL_NUSES (label)++;
4354
      haifa_init_insn (x);
4355
 
4356
      emit_barrier_after (x);
4357
 
4358
      sched_init_only_bb (empty, NULL);
4359
      sched_init_only_bb (single, NULL);
4360
      sched_extend_bb ();
4361
 
4362
      adding_bb_to_current_region_p = true;
4363
      before_recovery = single;
4364
      after_recovery = empty;
4365
 
4366
      if (before_recovery_ptr)
4367
        *before_recovery_ptr = before_recovery;
4368
 
4369
      if (sched_verbose >= 2 && spec_info->dump)
4370
        fprintf (spec_info->dump,
4371
                 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
4372
                 last->index, single->index, empty->index);
4373
    }
4374
  else
4375
    before_recovery = last;
4376
}
4377
 
4378
/* Returns new recovery block.  */
4379
basic_block
4380
sched_create_recovery_block (basic_block *before_recovery_ptr)
4381
{
4382
  rtx label;
4383
  rtx barrier;
4384
  basic_block rec;
4385
 
4386
  haifa_recovery_bb_recently_added_p = true;
4387
  haifa_recovery_bb_ever_added_p = true;
4388
 
4389
  init_before_recovery (before_recovery_ptr);
4390
 
4391
  barrier = get_last_bb_insn (before_recovery);
4392
  gcc_assert (BARRIER_P (barrier));
4393
 
4394
  label = emit_label_after (gen_label_rtx (), barrier);
4395
 
4396
  rec = create_basic_block (label, label, before_recovery);
4397
 
4398
  /* A recovery block always ends with an unconditional jump.  */
4399
  emit_barrier_after (BB_END (rec));
4400
 
4401
  if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
4402
    BB_SET_PARTITION (rec, BB_COLD_PARTITION);
4403
 
4404
  if (sched_verbose && spec_info->dump)
4405
    fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
4406
             rec->index);
4407
 
4408
  return rec;
4409
}
4410
 
4411
/* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
4412
   and emit necessary jumps.  */
4413
void
4414
sched_create_recovery_edges (basic_block first_bb, basic_block rec,
4415
                             basic_block second_bb)
4416
{
4417
  rtx label;
4418
  rtx jump;
4419
  int edge_flags;
4420
 
4421
  /* Fix the incoming edge.  */
4422
  /* ??? Which other flags should be specified?  */
4423
  if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
4424
    /* Partition type is the same, if it is "unpartitioned".  */
4425
    edge_flags = EDGE_CROSSING;
4426
  else
4427
    edge_flags = 0;
4428
 
4429
  make_edge (first_bb, rec, edge_flags);
4430
  label = block_label (second_bb);
4431
  jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
4432
  JUMP_LABEL (jump) = label;
4433
  LABEL_NUSES (label)++;
4434
 
4435
  if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
4436
    /* Partition type is the same, if it is "unpartitioned".  */
4437
    {
4438
      /* Rewritten from cfgrtl.c.  */
4439
      if (flag_reorder_blocks_and_partition
4440
          && targetm.have_named_sections)
4441
        {
4442
          /* We don't need the same note for the check because
4443
             any_condjump_p (check) == true.  */
4444
          add_reg_note (jump, REG_CROSSING_JUMP, NULL_RTX);
4445
        }
4446
      edge_flags = EDGE_CROSSING;
4447
    }
4448
  else
4449
    edge_flags = 0;
4450
 
4451
  make_single_succ_edge (rec, second_bb, edge_flags);
4452
}
4453
 
4454
/* This function creates recovery code for INSN.  If MUTATE_P is nonzero,
4455
   INSN is a simple check that should be converted to a branchy one.  */
4456
static void
4457
create_check_block_twin (rtx insn, bool mutate_p)
4458
{
4459
  basic_block rec;
4460
  rtx label, check, twin;
4461
  ds_t fs;
4462
  sd_iterator_def sd_it;
4463
  dep_t dep;
4464
  dep_def _new_dep, *new_dep = &_new_dep;
4465
  ds_t todo_spec;
4466
 
4467
  gcc_assert (ORIG_PAT (insn) != NULL_RTX);
4468
 
4469
  if (!mutate_p)
4470
    todo_spec = TODO_SPEC (insn);
4471
  else
4472
    {
4473
      gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
4474
                  && (TODO_SPEC (insn) & SPECULATIVE) == 0);
4475
 
4476
      todo_spec = CHECK_SPEC (insn);
4477
    }
4478
 
4479
  todo_spec &= SPECULATIVE;
4480
 
4481
  /* Create recovery block.  */
4482
  if (mutate_p || targetm.sched.needs_block_p (todo_spec))
4483
    {
4484
      rec = sched_create_recovery_block (NULL);
4485
      label = BB_HEAD (rec);
4486
    }
4487
  else
4488
    {
4489
      rec = EXIT_BLOCK_PTR;
4490
      label = NULL_RTX;
4491
    }
4492
 
4493
  /* Emit CHECK.  */
4494
  check = targetm.sched.gen_spec_check (insn, label, todo_spec);
4495
 
4496
  if (rec != EXIT_BLOCK_PTR)
4497
    {
4498
      /* To have mem_reg alive at the beginning of second_bb,
4499
         we emit the check BEFORE insn, so that after splitting,
4500
         insn will be at the beginning of second_bb, which will
4501
         provide us with the correct life information.  */
4502
      check = emit_jump_insn_before (check, insn);
4503
      JUMP_LABEL (check) = label;
4504
      LABEL_NUSES (label)++;
4505
    }
4506
  else
4507
    check = emit_insn_before (check, insn);
4508
 
4509
  /* Extend data structures.  */
4510
  haifa_init_insn (check);
4511
 
4512
  /* CHECK is being added to current region.  Extend ready list.  */
4513
  gcc_assert (sched_ready_n_insns != -1);
4514
  sched_extend_ready_list (sched_ready_n_insns + 1);
4515
 
4516
  if (current_sched_info->add_remove_insn)
4517
    current_sched_info->add_remove_insn (insn, 0);
4518
 
4519
  RECOVERY_BLOCK (check) = rec;
4520
 
4521
  if (sched_verbose && spec_info->dump)
4522
    fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
4523
             (*current_sched_info->print_insn) (check, 0));
4524
 
4525
  gcc_assert (ORIG_PAT (insn));
4526
 
4527
  /* Initialize TWIN (twin is a duplicate of original instruction
4528
     in the recovery block).  */
4529
  if (rec != EXIT_BLOCK_PTR)
4530
    {
4531
      sd_iterator_def sd_it;
4532
      dep_t dep;
4533
 
4534
      FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
4535
        if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
4536
          {
4537
            struct _dep _dep2, *dep2 = &_dep2;
4538
 
4539
            init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
4540
 
4541
            sd_add_dep (dep2, true);
4542
          }
4543
 
4544
      twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
4545
      haifa_init_insn (twin);
4546
 
4547
      if (sched_verbose && spec_info->dump)
4548
        /* INSN_BB (insn) isn't determined for twin insns yet.
4549
           So we can't use current_sched_info->print_insn.  */
4550
        fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
4551
                 INSN_UID (twin), rec->index);
4552
    }
4553
  else
4554
    {
4555
      ORIG_PAT (check) = ORIG_PAT (insn);
4556
      HAS_INTERNAL_DEP (check) = 1;
4557
      twin = check;
4558
      /* ??? We probably should change all OUTPUT dependencies to
4559
         (TRUE | OUTPUT).  */
4560
    }
4561
 
4562
  /* Copy all resolved back dependencies of INSN to TWIN.  This will
4563
     provide correct value for INSN_TICK (TWIN).  */
4564
  sd_copy_back_deps (twin, insn, true);
4565
 
4566
  if (rec != EXIT_BLOCK_PTR)
4567
    /* In case of branchy check, fix CFG.  */
4568
    {
4569
      basic_block first_bb, second_bb;
4570
      rtx jump;
4571
 
4572
      first_bb = BLOCK_FOR_INSN (check);
4573
      second_bb = sched_split_block (first_bb, check);
4574
 
4575
      sched_create_recovery_edges (first_bb, rec, second_bb);
4576
 
4577
      sched_init_only_bb (second_bb, first_bb);
4578
      sched_init_only_bb (rec, EXIT_BLOCK_PTR);
4579
 
4580
      jump = BB_END (rec);
4581
      haifa_init_insn (jump);
4582
    }
4583
 
4584
  /* Move backward dependences from INSN to CHECK and
4585
     move forward dependences from INSN to TWIN.  */
4586
 
4587
  /* First, create dependencies between INSN's producers and CHECK & TWIN.  */
4588
  FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
4589
    {
4590
      rtx pro = DEP_PRO (dep);
4591
      ds_t ds;
4592
 
4593
      /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
4594
         check --TRUE--> producer  ??? or ANTI ???
4595
         twin  --TRUE--> producer
4596
         twin  --ANTI--> check
4597
 
4598
         If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
4599
         check --ANTI--> producer
4600
         twin  --ANTI--> producer
4601
         twin  --ANTI--> check
4602
 
4603
         If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
4604
         check ~~TRUE~~> producer
4605
         twin  ~~TRUE~~> producer
4606
         twin  --ANTI--> check  */
4607
 
4608
      ds = DEP_STATUS (dep);
4609
 
4610
      if (ds & BEGIN_SPEC)
4611
        {
4612
          gcc_assert (!mutate_p);
4613
          ds &= ~BEGIN_SPEC;
4614
        }
4615
 
4616
      init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
4617
      sd_add_dep (new_dep, false);
4618
 
4619
      if (rec != EXIT_BLOCK_PTR)
4620
        {
4621
          DEP_CON (new_dep) = twin;
4622
          sd_add_dep (new_dep, false);
4623
        }
4624
    }
4625
 
4626
  /* Second, remove backward dependencies of INSN.  */
4627
  for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
4628
       sd_iterator_cond (&sd_it, &dep);)
4629
    {
4630
      if ((DEP_STATUS (dep) & BEGIN_SPEC)
4631
          || mutate_p)
4632
        /* We can delete this dep because we overcome it with
4633
           BEGIN_SPECULATION.  */
4634
        sd_delete_dep (sd_it);
4635
      else
4636
        sd_iterator_next (&sd_it);
4637
    }
4638
 
4639
  /* Future Speculations.  Determine what BE_IN speculations will be like.  */
4640
  fs = 0;
4641
 
4642
  /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
4643
     here.  */
4644
 
4645
  gcc_assert (!DONE_SPEC (insn));
4646
 
4647
  if (!mutate_p)
4648
    {
4649
      ds_t ts = TODO_SPEC (insn);
4650
 
4651
      DONE_SPEC (insn) = ts & BEGIN_SPEC;
4652
      CHECK_SPEC (check) = ts & BEGIN_SPEC;
4653
 
4654
      /* The likelihood of future speculations depends solely upon the initial
4655
         BEGIN speculation.  */
4656
      if (ts & BEGIN_DATA)
4657
        fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
4658
      if (ts & BEGIN_CONTROL)
4659
        fs = set_dep_weak (fs, BE_IN_CONTROL,
4660
                           get_dep_weak (ts, BEGIN_CONTROL));
4661
    }
4662
  else
4663
    CHECK_SPEC (check) = CHECK_SPEC (insn);
4664
 
4665
  /* Future speculations: call the helper.  */
4666
  process_insn_forw_deps_be_in_spec (insn, twin, fs);
4667
 
4668
  if (rec != EXIT_BLOCK_PTR)
4669
    {
4670
      /* Which types of dependencies should we use here is,
4671
         generally, a machine-dependent question...  But, for now,
4672
         it is not.  */
4673
 
4674
      if (!mutate_p)
4675
        {
4676
          init_dep (new_dep, insn, check, REG_DEP_TRUE);
4677
          sd_add_dep (new_dep, false);
4678
 
4679
          init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
4680
          sd_add_dep (new_dep, false);
4681
        }
4682
      else
4683
        {
4684
          if (spec_info->dump)
4685
            fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
4686
                     (*current_sched_info->print_insn) (insn, 0));
4687
 
4688
          /* Remove all dependencies of the INSN.  */
4689
          {
4690
            sd_it = sd_iterator_start (insn, (SD_LIST_FORW
4691
                                              | SD_LIST_BACK
4692
                                              | SD_LIST_RES_BACK));
4693
            while (sd_iterator_cond (&sd_it, &dep))
4694
              sd_delete_dep (sd_it);
4695
          }
4696
 
4697
          /* If the former check (INSN) was already moved to the ready (or queue)
4698
             list, add new check (CHECK) there too.  */
4699
          if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
4700
            try_ready (check);
4701
 
4702
          /* Remove old check from instruction stream and free its
4703
             data.  */
4704
          sched_remove_insn (insn);
4705
        }
4706
 
4707
      init_dep (new_dep, check, twin, REG_DEP_ANTI);
4708
      sd_add_dep (new_dep, false);
4709
    }
4710
  else
4711
    {
4712
      init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
4713
      sd_add_dep (new_dep, false);
4714
    }
4715
 
4716
  if (!mutate_p)
4717
    /* Fix priorities.  If MUTATE_P is nonzero, this is not necessary,
4718
       because it'll be done later in add_to_speculative_block.  */
4719
    {
4720
      rtx_vec_t priorities_roots = NULL;
4721
 
4722
      clear_priorities (twin, &priorities_roots);
4723
      calc_priorities (priorities_roots);
4724
      VEC_free (rtx, heap, priorities_roots);
4725
    }
4726
}
4727
 
4728
/* Removes dependencies between instructions in the recovery block REC
4729
   and ordinary region instructions.  It keeps inner dependences so it
4730
   won't be necessary to recompute them.  */
4731
static void
4732
fix_recovery_deps (basic_block rec)
4733
{
4734
  rtx note, insn, jump, ready_list = 0;
4735
  bitmap_head in_ready;
4736
  rtx link;
4737
 
4738
  bitmap_initialize (&in_ready, 0);
4739
 
4740
  /* NOTE - a basic block note.  */
4741
  note = NEXT_INSN (BB_HEAD (rec));
4742
  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4743
  insn = BB_END (rec);
4744
  gcc_assert (JUMP_P (insn));
4745
  insn = PREV_INSN (insn);
4746
 
4747
  do
4748
    {
4749
      sd_iterator_def sd_it;
4750
      dep_t dep;
4751
 
4752
      for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
4753
           sd_iterator_cond (&sd_it, &dep);)
4754
        {
4755
          rtx consumer = DEP_CON (dep);
4756
 
4757
          if (BLOCK_FOR_INSN (consumer) != rec)
4758
            {
4759
              sd_delete_dep (sd_it);
4760
 
4761
              if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
4762
                {
4763
                  ready_list = alloc_INSN_LIST (consumer, ready_list);
4764
                  bitmap_set_bit (&in_ready, INSN_LUID (consumer));
4765
                }
4766
            }
4767
          else
4768
            {
4769
              gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
4770
 
4771
              sd_iterator_next (&sd_it);
4772
            }
4773
        }
4774
 
4775
      insn = PREV_INSN (insn);
4776
    }
4777
  while (insn != note);
4778
 
4779
  bitmap_clear (&in_ready);
4780
 
4781
  /* Try to add instructions to the ready or queue list.  */
4782
  for (link = ready_list; link; link = XEXP (link, 1))
4783
    try_ready (XEXP (link, 0));
4784
  free_INSN_LIST_list (&ready_list);
4785
 
4786
  /* Fixing jump's dependences.  */
4787
  insn = BB_HEAD (rec);
4788
  jump = BB_END (rec);
4789
 
4790
  gcc_assert (LABEL_P (insn));
4791
  insn = NEXT_INSN (insn);
4792
 
4793
  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
4794
  add_jump_dependencies (insn, jump);
4795
}
4796
 
4797
/* Change pattern of INSN to NEW_PAT.  */
4798
void
4799
sched_change_pattern (rtx insn, rtx new_pat)
4800
{
4801
  int t;
4802
 
4803
  t = validate_change (insn, &PATTERN (insn), new_pat, 0);
4804
  gcc_assert (t);
4805
  dfa_clear_single_insn_cache (insn);
4806
}
4807
 
4808
/* Change pattern of INSN to NEW_PAT.  Invalidate cached haifa
4809
   instruction data.  */
4810
static void
4811
haifa_change_pattern (rtx insn, rtx new_pat)
4812
{
4813
  sched_change_pattern (insn, new_pat);
4814
 
4815
  /* Invalidate INSN_COST, so it'll be recalculated.  */
4816
  INSN_COST (insn) = -1;
4817
  /* Invalidate INSN_TICK, so it'll be recalculated.  */
4818
  INSN_TICK (insn) = INVALID_TICK;
4819
}
4820
 
4821
/* -1 - can't speculate,
4822
   0 - for speculation with REQUEST mode it is OK to use
4823
   current instruction pattern,
4824
   1 - need to change pattern for *NEW_PAT to be speculative.  */
4825
int
4826
sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4827
{
4828
  gcc_assert (current_sched_info->flags & DO_SPECULATION
4829
              && (request & SPECULATIVE)
4830
              && sched_insn_is_legitimate_for_speculation_p (insn, request));
4831
 
4832
  if ((request & spec_info->mask) != request)
4833
    return -1;
4834
 
4835
  if (request & BE_IN_SPEC
4836
      && !(request & BEGIN_SPEC))
4837
    return 0;
4838
 
4839
  return targetm.sched.speculate_insn (insn, request, new_pat);
4840
}
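
/* A minimal illustrative sketch of how a caller might consume the -1/0/1
   convention of sched_speculate_insn above.  The helper name
   apply_speculation_sketch is hypothetical (not from this file), and the
   caller is assumed to have already verified
   sched_insn_is_legitimate_for_speculation_p for INSN and REQUEST.  */
static bool
apply_speculation_sketch (rtx insn, ds_t request)
{
  rtx new_pat = NULL_RTX;
  int res = sched_speculate_insn (insn, request, &new_pat);

  if (res < 0)
    return false;                     /* Can't speculate under REQUEST.  */

  if (res > 0)
    /* The pattern must be changed for INSN to become speculative.  */
    haifa_change_pattern (insn, new_pat);

  /* res == 0: the current pattern is already usable for REQUEST.  */
  return true;
}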
4841
 
4842
static int
4843
haifa_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
4844
{
4845
  gcc_assert (sched_deps_info->generate_spec_deps
4846
              && !IS_SPECULATION_CHECK_P (insn));
4847
 
4848
  if (HAS_INTERNAL_DEP (insn)
4849
      || SCHED_GROUP_P (insn))
4850
    return -1;
4851
 
4852
  return sched_speculate_insn (insn, request, new_pat);
4853
}
4854
 
4855
/* Print some information about block BB, which starts with HEAD and
4856
   ends with TAIL, before scheduling it.
4857
   I is zero if the scheduler is about to start with a fresh ebb.  */
4858
static void
4859
dump_new_block_header (int i, basic_block bb, rtx head, rtx tail)
4860
{
4861
  if (!i)
4862
    fprintf (sched_dump,
4863
             ";;   ======================================================\n");
4864
  else
4865
    fprintf (sched_dump,
4866
             ";;   =====================ADVANCING TO=====================\n");
4867
  fprintf (sched_dump,
4868
           ";;   -- basic block %d from %d to %d -- %s reload\n",
4869
           bb->index, INSN_UID (head), INSN_UID (tail),
4870
           (reload_completed ? "after" : "before"));
4871
  fprintf (sched_dump,
4872
           ";;   ======================================================\n");
4873
  fprintf (sched_dump, "\n");
4874
}
4875
 
4876
/* Unlink basic block notes and labels and save them, so they
4877
   can be easily restored.  We unlink basic block notes in the EBB to
4878
   provide backward compatibility with the previous code, as target
4879
   backends assume that there'll be only instructions between
4880
   current_sched_info->{head and tail}.  We restore these notes as
4881
   soon as we can.
4882
   FIRST (LAST) is the first (last) basic block in the ebb.
4883
   NB: In the usual case (FIRST == LAST) nothing is really done.  */
4884
void
4885
unlink_bb_notes (basic_block first, basic_block last)
4886
{
4887
  /* We DON'T unlink basic block notes of the first block in the ebb.  */
4888
  if (first == last)
4889
    return;
4890
 
4891
  bb_header = XNEWVEC (rtx, last_basic_block);
4892
 
4893
  /* Make a sentinel.  */
4894
  if (last->next_bb != EXIT_BLOCK_PTR)
4895
    bb_header[last->next_bb->index] = 0;
4896
 
4897
  first = first->next_bb;
4898
  do
4899
    {
4900
      rtx prev, label, note, next;
4901
 
4902
      label = BB_HEAD (last);
4903
      if (LABEL_P (label))
4904
        note = NEXT_INSN (label);
4905
      else
4906
        note = label;
4907
      gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4908
 
4909
      prev = PREV_INSN (label);
4910
      next = NEXT_INSN (note);
4911
      gcc_assert (prev && next);
4912
 
4913
      NEXT_INSN (prev) = next;
4914
      PREV_INSN (next) = prev;
4915
 
4916
      bb_header[last->index] = label;
4917
 
4918
      if (last == first)
4919
        break;
4920
 
4921
      last = last->prev_bb;
4922
    }
4923
  while (1);
4924
}
4925
 
4926
/* Restore basic block notes.
4927
   FIRST is the first basic block in the ebb.  */
4928
static void
4929
restore_bb_notes (basic_block first)
4930
{
4931
  if (!bb_header)
4932
    return;
4933
 
4934
  /* We DON'T unlink basic block notes of the first block in the ebb.  */
4935
  first = first->next_bb;
4936
  /* Remember: FIRST is actually a second basic block in the ebb.  */
4937
 
4938
  while (first != EXIT_BLOCK_PTR
4939
         && bb_header[first->index])
4940
    {
4941
      rtx prev, label, note, next;
4942
 
4943
      label = bb_header[first->index];
4944
      prev = PREV_INSN (label);
4945
      next = NEXT_INSN (prev);
4946
 
4947
      if (LABEL_P (label))
4948
        note = NEXT_INSN (label);
4949
      else
4950
        note = label;
4951
      gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4952
 
4953
      bb_header[first->index] = 0;
4954
 
4955
      NEXT_INSN (prev) = label;
4956
      NEXT_INSN (note) = next;
4957
      PREV_INSN (next) = note;
4958
 
4959
      first = first->next_bb;
4960
    }
4961
 
4962
  free (bb_header);
4963
  bb_header = 0;
4964
}
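
/* A minimal illustrative sketch of the intended pairing of
   unlink_bb_notes () and restore_bb_notes () around scheduling of an ebb,
   per the comments above.  The function shown here is hypothetical and the
   scheduling step is elided; the real call sites live in the ebb
   scheduler.  */
static void
ebb_notes_pairing_sketch (basic_block first_bb, basic_block last_bb)
{
  /* Detach notes/labels of the second..last blocks so that only insns
     remain between current_sched_info->head and tail.  */
  unlink_bb_notes (first_bb, last_bb);

  /* ... the extended basic block would be scheduled here ...  */

  /* Reinsert the saved notes and labels into the insn stream.  */
  restore_bb_notes (first_bb);
}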
4965
 
4966
/* Helper function.
4967
   Fix the CFG after both intra-block and inter-block movement of
4968
   control_flow_insn_p JUMP.  */
4969
static void
4970
fix_jump_move (rtx jump)
4971
{
4972
  basic_block bb, jump_bb, jump_bb_next;
4973
 
4974
  bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4975
  jump_bb = BLOCK_FOR_INSN (jump);
4976
  jump_bb_next = jump_bb->next_bb;
4977
 
4978
  gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
4979
              || IS_SPECULATION_BRANCHY_CHECK_P (jump));
4980
 
4981
  if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
4982
    /* if jump_bb_next is not empty.  */
4983
    BB_END (jump_bb) = BB_END (jump_bb_next);
4984
 
4985
  if (BB_END (bb) != PREV_INSN (jump))
4986
    /* Then there are instructions after the jump that should be
4987
       placed into jump_bb_next.  */
4988
    BB_END (jump_bb_next) = BB_END (bb);
4989
  else
4990
    /* Otherwise jump_bb_next is empty.  */
4991
    BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
4992
 
4993
  /* To make assertion in move_insn happy.  */
4994
  BB_END (bb) = PREV_INSN (jump);
4995
 
4996
  update_bb_for_insn (jump_bb_next);
4997
}
4998
 
4999
/* Fix CFG after interblock movement of control_flow_insn_p JUMP.  */
5000
static void
5001
move_block_after_check (rtx jump)
5002
{
5003
  basic_block bb, jump_bb, jump_bb_next;
5004
  VEC(edge,gc) *t;
5005
 
5006
  bb = BLOCK_FOR_INSN (PREV_INSN (jump));
5007
  jump_bb = BLOCK_FOR_INSN (jump);
5008
  jump_bb_next = jump_bb->next_bb;
5009
 
5010
  update_bb_for_insn (jump_bb);
5011
 
5012
  gcc_assert (IS_SPECULATION_CHECK_P (jump)
5013
              || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
5014
 
5015
  unlink_block (jump_bb_next);
5016
  link_block (jump_bb_next, bb);
5017
 
5018
  t = bb->succs;
5019
  bb->succs = 0;
5020
  move_succs (&(jump_bb->succs), bb);
5021
  move_succs (&(jump_bb_next->succs), jump_bb);
5022
  move_succs (&t, jump_bb_next);
5023
 
5024
  df_mark_solutions_dirty ();
5025
 
5026
  common_sched_info->fix_recovery_cfg
5027
    (bb->index, jump_bb->index, jump_bb_next->index);
5028
}
5029
 
5030
/* Helper function for move_block_after_check.
5031
   This function attaches the edge vector pointed to by SUCCSP to
5032
   block TO.  */
5033
static void
5034
move_succs (VEC(edge,gc) **succsp, basic_block to)
5035
{
5036
  edge e;
5037
  edge_iterator ei;
5038
 
5039
  gcc_assert (to->succs == 0);
5040
 
5041
  to->succs = *succsp;
5042
 
5043
  FOR_EACH_EDGE (e, ei, to->succs)
5044
    e->src = to;
5045
 
5046
  *succsp = 0;
5047
}
5048
 
5049
/* Remove INSN from the instruction stream.
5050
   INSN should not have any dependencies.  */
5051
static void
5052
sched_remove_insn (rtx insn)
5053
{
5054
  sd_finish_insn (insn);
5055
 
5056
  change_queue_index (insn, QUEUE_NOWHERE);
5057
  current_sched_info->add_remove_insn (insn, 1);
5058
  remove_insn (insn);
5059
}
5060
 
5061
/* Clear priorities of all instructions that are forward dependent on INSN.
5062
   Store in the vector pointed to by ROOTS_PTR the insns on which priority ()
5063
   should be invoked to initialize all cleared priorities.  */
5064
static void
5065
clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
5066
{
5067
  sd_iterator_def sd_it;
5068
  dep_t dep;
5069
  bool insn_is_root_p = true;
5070
 
5071
  gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
5072
 
5073
  FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
5074
    {
5075
      rtx pro = DEP_PRO (dep);
5076
 
5077
      if (INSN_PRIORITY_STATUS (pro) >= 0
5078
          && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
5079
        {
5080
          /* If DEP doesn't contribute to priority then INSN itself should
5081
             be added to priority roots.  */
5082
          if (contributes_to_priority_p (dep))
5083
            insn_is_root_p = false;
5084
 
5085
          INSN_PRIORITY_STATUS (pro) = -1;
5086
          clear_priorities (pro, roots_ptr);
5087
        }
5088
    }
5089
 
5090
  if (insn_is_root_p)
5091
    VEC_safe_push (rtx, heap, *roots_ptr, insn);
5092
}
5093
 
5094
/* Recompute priorities of instructions whose priorities might have been
5095
   changed.  ROOTS is a vector of instructions whose priority computation will
5096
   trigger initialization of all cleared priorities.  */
5097
static void
5098
calc_priorities (rtx_vec_t roots)
5099
{
5100
  int i;
5101
  rtx insn;
5102
 
5103
  for (i = 0; VEC_iterate (rtx, roots, i, insn); i++)
5104
    priority (insn);
5105
}
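
/* A minimal illustrative sketch of the clear/recompute priority protocol
   used with clear_priorities () and calc_priorities () -- the same pattern
   as in create_check_block_twin above.  The helper name
   recompute_priorities_sketch is hypothetical; INSN is any insn whose
   dependents' priorities became stale.  */
static void
recompute_priorities_sketch (rtx insn)
{
  rtx_vec_t roots = NULL;

  clear_priorities (insn, &roots);  /* Invalidate and collect root insns.  */
  calc_priorities (roots);          /* priority () recomputes from roots.  */
  VEC_free (rtx, heap, roots);
}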
5106
 
5107
 
5108
/* Add dependences between JUMP and other instructions in the recovery
5109
   block.  INSN is the first insn in the recovery block.  */
5110
static void
5111
add_jump_dependencies (rtx insn, rtx jump)
5112
{
5113
  do
5114
    {
5115
      insn = NEXT_INSN (insn);
5116
      if (insn == jump)
5117
        break;
5118
 
5119
      if (dep_list_size (insn) == 0)
5120
        {
5121
          dep_def _new_dep, *new_dep = &_new_dep;
5122
 
5123
          init_dep (new_dep, insn, jump, REG_DEP_ANTI);
5124
          sd_add_dep (new_dep, false);
5125
        }
5126
    }
5127
  while (1);
5128
 
5129
  gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
5130
}
5131
 
5132
/* Return the NOTE_INSN_BASIC_BLOCK of BB.  */
5133
rtx
5134
bb_note (basic_block bb)
5135
{
5136
  rtx note;
5137
 
5138
  note = BB_HEAD (bb);
5139
  if (LABEL_P (note))
5140
    note = NEXT_INSN (note);
5141
 
5142
  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
5143
  return note;
5144
}
5145
 
5146
#ifdef ENABLE_CHECKING
5147
/* Helper function for check_cfg.
5148
   Return nonzero if the edge vector pointed to by EL has an edge with TYPE
5149
   in its flags.  */
5150
static int
5151
has_edge_p (VEC(edge,gc) *el, int type)
5152
{
5153
  edge e;
5154
  edge_iterator ei;
5155
 
5156
  FOR_EACH_EDGE (e, ei, el)
5157
    if (e->flags & type)
5158
      return 1;
5159
  return 0;
5160
}
5161
 
5162
/* Search back, starting at INSN, for an insn that is not a
5163
   NOTE_INSN_VAR_LOCATION.  Don't search beyond HEAD, and return it if
5164
   no such insn can be found.  */
5165
static inline rtx
5166
prev_non_location_insn (rtx insn, rtx head)
5167
{
5168
  while (insn != head && NOTE_P (insn)
5169
         && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
5170
    insn = PREV_INSN (insn);
5171
 
5172
  return insn;
5173
}
5174
 
5175
/* Check a few properties of the CFG between HEAD and TAIL.
5176
   If HEAD (TAIL) is NULL check from the beginning (till the end) of the
5177
   instruction stream.  */
5178
static void
5179
check_cfg (rtx head, rtx tail)
5180
{
5181
  rtx next_tail;
5182
  basic_block bb = 0;
5183
  int not_first = 0, not_last;
5184
 
5185
  if (head == NULL)
5186
    head = get_insns ();
5187
  if (tail == NULL)
5188
    tail = get_last_insn ();
5189
  next_tail = NEXT_INSN (tail);
5190
 
5191
  do
5192
    {
5193
      not_last = head != tail;
5194
 
5195
      if (not_first)
5196
        gcc_assert (NEXT_INSN (PREV_INSN (head)) == head);
5197
      if (not_last)
5198
        gcc_assert (PREV_INSN (NEXT_INSN (head)) == head);
5199
 
5200
      if (LABEL_P (head)
5201
          || (NOTE_INSN_BASIC_BLOCK_P (head)
5202
              && (!not_first
5203
                  || (not_first && !LABEL_P (PREV_INSN (head))))))
5204
        {
5205
          gcc_assert (bb == 0);
5206
          bb = BLOCK_FOR_INSN (head);
5207
          if (bb != 0)
5208
            gcc_assert (BB_HEAD (bb) == head);
5209
          else
5210
            /* This is the case of a jump table.  See inside_basic_block_p ().  */
5211
            gcc_assert (LABEL_P (head) && !inside_basic_block_p (head));
5212
        }
5213
 
5214
      if (bb == 0)
5215
        {
5216
          gcc_assert (!inside_basic_block_p (head));
5217
          head = NEXT_INSN (head);
5218
        }
5219
      else
5220
        {
5221
          gcc_assert (inside_basic_block_p (head)
5222
                      || NOTE_P (head));
5223
          gcc_assert (BLOCK_FOR_INSN (head) == bb);
5224
 
5225
          if (LABEL_P (head))
5226
            {
5227
              head = NEXT_INSN (head);
5228
              gcc_assert (NOTE_INSN_BASIC_BLOCK_P (head));
5229
            }
5230
          else
5231
            {
5232
              if (control_flow_insn_p (head))
5233
                {
5234
                  gcc_assert (prev_non_location_insn (BB_END (bb), head)
5235
                              == head);
5236
 
5237
                  if (any_uncondjump_p (head))
5238
                    gcc_assert (EDGE_COUNT (bb->succs) == 1
5239
                                && BARRIER_P (NEXT_INSN (head)));
5240
                  else if (any_condjump_p (head))
5241
                    gcc_assert (/* Usual case.  */
5242
                                (EDGE_COUNT (bb->succs) > 1
5243
                                 && !BARRIER_P (NEXT_INSN (head)))
5244
                                /* Or jump to the next instruction.  */
5245
                                || (EDGE_COUNT (bb->succs) == 1
5246
                                    && (BB_HEAD (EDGE_I (bb->succs, 0)->dest)
5247
                                        == JUMP_LABEL (head))));
5248
                }
5249
              if (BB_END (bb) == head)
5250
                {
5251
                  if (EDGE_COUNT (bb->succs) > 1)
5252
                    gcc_assert (control_flow_insn_p (prev_non_location_insn
5253
                                                     (head, BB_HEAD (bb)))
5254
                                || has_edge_p (bb->succs, EDGE_COMPLEX));
5255
                  bb = 0;
5256
                }
5257
 
5258
              head = NEXT_INSN (head);
5259
            }
5260
        }
5261
 
5262
      not_first = 1;
5263
    }
5264
  while (head != next_tail);
5265
 
5266
  gcc_assert (bb == 0);
5267
}
5268
 
5269
#endif /* ENABLE_CHECKING */
5270
 
5271
/* Extend per basic block data structures.  */
5272
static void
5273
extend_bb (void)
5274
{
5275
  if (sched_scan_info->extend_bb)
5276
    sched_scan_info->extend_bb ();
5277
}
5278
 
5279
/* Init data for BB.  */
5280
static void
5281
init_bb (basic_block bb)
5282
{
5283
  if (sched_scan_info->init_bb)
5284
    sched_scan_info->init_bb (bb);
5285
}
5286
 
5287
/* Extend per insn data structures.  */
5288
static void
5289
extend_insn (void)
5290
{
5291
  if (sched_scan_info->extend_insn)
5292
    sched_scan_info->extend_insn ();
5293
}
5294
 
5295
/* Init data structures for INSN.  */
5296
static void
5297
init_insn (rtx insn)
5298
{
5299
  if (sched_scan_info->init_insn)
5300
    sched_scan_info->init_insn (insn);
5301
}
5302
 
5303
/* Init all insns in BB.  */
5304
static void
5305
init_insns_in_bb (basic_block bb)
5306
{
5307
  rtx insn;
5308
 
5309
  FOR_BB_INSNS (bb, insn)
5310
    init_insn (insn);
5311
}
5312
 
5313
/* A driver function to add a set of basic blocks (BBS),
5314
   a single basic block (BB), a set of insns (INSNS) or a single insn (INSN)
5315
   to the scheduling region.  */
5316
void
5317
sched_scan (const struct sched_scan_info_def *ssi,
5318
            bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5319
{
5320
  sched_scan_info = ssi;
5321
 
5322
  if (bbs != NULL || bb != NULL)
5323
    {
5324
      extend_bb ();
5325
 
5326
      if (bbs != NULL)
5327
        {
5328
          unsigned i;
5329
          basic_block x;
5330
 
5331
          for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5332
            init_bb (x);
5333
        }
5334
 
5335
      if (bb != NULL)
5336
        init_bb (bb);
5337
    }
5338
 
5339
  extend_insn ();
5340
 
5341
  if (bbs != NULL)
5342
    {
5343
      unsigned i;
5344
      basic_block x;
5345
 
5346
      for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
5347
        init_insns_in_bb (x);
5348
    }
5349
 
5350
  if (bb != NULL)
5351
    init_insns_in_bb (bb);
5352
 
5353
  if (insns != NULL)
5354
    {
5355
      unsigned i;
5356
      rtx x;
5357
 
5358
      for (i = 0; VEC_iterate (rtx, insns, i, x); i++)
5359
        init_insn (x);
5360
    }
5361
 
5362
  if (insn != NULL)
5363
    init_insn (insn);
5364
}
5365
 
5366
 
5367
/* Extend data structures for logical insn UID.  */
5368
static void
5369
luids_extend_insn (void)
5370
{
5371
  int new_luids_max_uid = get_max_uid () + 1;
5372
 
5373
  VEC_safe_grow_cleared (int, heap, sched_luids, new_luids_max_uid);
5374
}
5375
 
5376
/* Initialize LUID for INSN.  */
5377
static void
5378
luids_init_insn (rtx insn)
5379
{
5380
  int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
5381
  int luid;
5382
 
5383
  if (i >= 0)
5384
    {
5385
      luid = sched_max_luid;
5386
      sched_max_luid += i;
5387
    }
5388
  else
5389
    luid = -1;
5390
 
5391
  SET_INSN_LUID (insn, luid);
5392
}
5393
 
5394
/* Initialize luids for BBS, BB, INSNS and INSN.
5395
   The hook common_sched_info->luid_for_non_insn () is used to determine
5396
   if notes, labels, etc. need luids.  */
5397
void
5398
sched_init_luids (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5399
{
5400
  const struct sched_scan_info_def ssi =
5401
    {
5402
      NULL, /* extend_bb */
5403
      NULL, /* init_bb */
5404
      luids_extend_insn, /* extend_insn */
5405
      luids_init_insn /* init_insn */
5406
    };
5407
 
5408
  sched_scan (&ssi, bbs, bb, insns, insn);
5409
}
5410
 
5411
/* Free LUIDs.  */
5412
void
5413
sched_finish_luids (void)
5414
{
5415
  VEC_free (int, heap, sched_luids);
5416
  sched_max_luid = 1;
5417
}
5418
 
5419
/* Return logical uid of INSN.  Helpful while debugging.  */
5420
int
5421
insn_luid (rtx insn)
5422
{
5423
  return INSN_LUID (insn);
5424
}
5425
 
5426
/* Extend per insn data in the target.  */
5427
void
5428
sched_extend_target (void)
5429
{
5430
  if (targetm.sched.h_i_d_extended)
5431
    targetm.sched.h_i_d_extended ();
5432
}
5433
 
5434
/* Extend global scheduler structures (those that live across calls to
5435
   schedule_block) to include information about a just-emitted insn.  */
5436
static void
5437
extend_h_i_d (void)
5438
{
5439
  int reserve = (get_max_uid () + 1
5440
                 - VEC_length (haifa_insn_data_def, h_i_d));
5441
  if (reserve > 0
5442
      && ! VEC_space (haifa_insn_data_def, h_i_d, reserve))
5443
    {
5444
      VEC_safe_grow_cleared (haifa_insn_data_def, heap, h_i_d,
5445
                             3 * get_max_uid () / 2);
5446
      sched_extend_target ();
5447
    }
5448
}
5449
 
5450
/* Initialize the h_i_d entry of INSN with default values.
5451
   Values that are not explicitly initialized here hold zero.  */
5452
static void
5453
init_h_i_d (rtx insn)
5454
{
5455
  if (INSN_LUID (insn) > 0)
5456
    {
5457
      INSN_COST (insn) = -1;
5458
      QUEUE_INDEX (insn) = QUEUE_NOWHERE;
5459
      INSN_TICK (insn) = INVALID_TICK;
5460
      INTER_TICK (insn) = INVALID_TICK;
5461
      TODO_SPEC (insn) = HARD_DEP;
5462
    }
5463
}
5464
 
5465
/* Initialize haifa_insn_data for BBS, BB, INSNS and INSN.  */
5466
void
5467
haifa_init_h_i_d (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
5468
{
5469
  const struct sched_scan_info_def ssi =
5470
    {
5471
      NULL, /* extend_bb */
5472
      NULL, /* init_bb */
5473
      extend_h_i_d, /* extend_insn */
5474
      init_h_i_d /* init_insn */
5475
    };
5476
 
5477
  sched_scan (&ssi, bbs, bb, insns, insn);
5478
}
5479
 
5480
/* Finalize haifa_insn_data.  */
5481
void
5482
haifa_finish_h_i_d (void)
5483
{
5484
  int i;
5485
  haifa_insn_data_t data;
5486
  struct reg_use_data *use, *next;
5487
 
5488
  for (i = 0; VEC_iterate (haifa_insn_data_def, h_i_d, i, data); i++)
5489
    {
5490
      if (data->reg_pressure != NULL)
5491
        free (data->reg_pressure);
5492
      for (use = data->reg_use_list; use != NULL; use = next)
5493
        {
5494
          next = use->next_insn_use;
5495
          free (use);
5496
        }
5497
    }
5498
  VEC_free (haifa_insn_data_def, heap, h_i_d);
5499
}
5500
 
5501
/* Init data for the new insn INSN.  */
5502
static void
5503
haifa_init_insn (rtx insn)
5504
{
5505
  gcc_assert (insn != NULL);
5506
 
5507
  sched_init_luids (NULL, NULL, NULL, insn);
5508
  sched_extend_target ();
5509
  sched_deps_init (false);
5510
  haifa_init_h_i_d (NULL, NULL, NULL, insn);
5511
 
5512
  if (adding_bb_to_current_region_p)
5513
    {
5514
      sd_init_insn (insn);
5515
 
5516
      /* Extend dependency caches by one element.  */
5517
      extend_dependency_caches (1, false);
5518
    }
5519
}
5520
 
5521
/* Init data for the new basic block BB which comes after AFTER.  */
5522
static void
5523
haifa_init_only_bb (basic_block bb, basic_block after)
5524
{
5525
  gcc_assert (bb != NULL);
5526
 
5527
  sched_init_bbs ();
5528
 
5529
  if (common_sched_info->add_block)
5530
    /* This changes only data structures of the front-end.  */
5531
    common_sched_info->add_block (bb, after);
5532
}
5533
 
5534
/* A generic version of sched_split_block ().  */
5535
basic_block
5536
sched_split_block_1 (basic_block first_bb, rtx after)
5537
{
5538
  edge e;
5539
 
5540
  e = split_block (first_bb, after);
5541
  gcc_assert (e->src == first_bb);
5542
 
5543
  /* sched_split_block emits note if *check == BB_END.  Probably it
5544
     is better to rip that note off.  */
5545
 
5546
  return e->dest;
5547
}
5548
 
5549
/* A generic version of sched_create_empty_bb ().  */
5550
basic_block
5551
sched_create_empty_bb_1 (basic_block after)
5552
{
5553
  return create_empty_bb (after);
5554
}
5555
 
5556
/* Insert PAT as an INSN into the schedule and update the necessary data
5557
   structures to account for it. */
5558
rtx
5559
sched_emit_insn (rtx pat)
5560
{
5561
  rtx insn = emit_insn_after (pat, last_scheduled_insn);
5562
  last_scheduled_insn = insn;
5563
  haifa_init_insn (insn);
5564
  return insn;
5565
}
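
/* A minimal illustrative sketch of using sched_emit_insn ().  gen_nop ()
   is assumed to be provided by the target's machine description, and the
   helper name sched_emit_nop_sketch is hypothetical.  The emitted insn is
   placed after last_scheduled_insn and receives fresh luid/haifa data
   through haifa_init_insn (), as shown above.  */
static rtx
sched_emit_nop_sketch (void)
{
  return sched_emit_insn (gen_nop ());
}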
5566
 
5567
#endif /* INSN_SCHEDULING */
