\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006, 2007, 2008, 2010, 2011 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).                    GNU OpenMP runtime library
@end direntry

This manual documents the GNU implementation of the OpenMP API for
multi-platform shared-memory parallel programming in C/C++ and Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title The GNU OpenMP Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU implementation of the
@uref{http://www.openmp.org, OpenMP} Application Programming Interface (API)
for multi-platform shared-memory parallel programming in C/C++ and Fortran.



@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* Runtime Library Routines::   The OpenMP runtime application programming
                               interface.
* Environment Variables::      Influencing runtime behavior with environment
                               variables.
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in GNU OpenMP.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Index::                      Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified. This enables the OpenMP directive
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).

A complete description of all OpenMP directives accepted may be found in
the @uref{http://www.openmp.org, OpenMP Application Program Interface} manual,
version 3.1.


@c ---------------------------------------------------------------------
@c Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenMP
specification, version 3.1. The routines are structured into the following
three parts:

Control threads, processors and the parallel environment.

@menu
* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_dynamic::             Dynamic teams setting
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Maximum number of active regions
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_procs::           Number of processors online
* omp_get_num_threads::         Size of the active team
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_team_size::           Number of threads in a team
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method
@end menu

Initialize, set, test, unset and destroy simple and nested locks.

@menu
* omp_init_lock::            Initialize simple lock
* omp_set_lock::             Wait for and set simple lock
* omp_test_lock::            Test and set simple lock if available
* omp_unset_lock::           Unset simple lock
* omp_destroy_lock::         Destroy simple lock
* omp_init_nest_lock::       Initialize nested lock
* omp_set_nest_lock::        Wait for and set nested lock
* omp_test_nest_lock::       Test and set nested lock if available
* omp_unset_nest_lock::      Unset nested lock
* omp_destroy_nest_lock::    Destroy nested lock
@end menu

Portable, thread-based, wall clock timer.

@menu
* omp_get_wtick::            Get timer precision.
* omp_get_wtime::            Elapsed wall clock time.
@end menu


@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
enclosing the call.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.19.
@end table


@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread. For values of @var{level} outside
the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item                   @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.17.
@end table


@node omp_get_dynamic
@section @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if dynamic adjustment of team sizes is
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

The dynamic team setting may be initialized at startup by the
@code{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.8.
@end table


@node omp_get_level
@section @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
This function returns the nesting level of the parallel blocks
enclosing the call.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_active_level}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.16.
@end table


@node omp_get_max_active_levels
@section @code{omp_get_max_active_levels} -- Maximum number of active regions
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.15.
@end table


@node omp_get_max_threads
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Return the maximum number of threads that would be used to form a team
if a parallel region without a @code{num_threads} clause were encountered.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.3.
@end table


@node omp_get_nested
@section @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

Nested parallel regions may be initialized at startup by the
@code{OMP_NESTED} environment variable or at runtime using
@code{omp_set_nested}. If undefined, nested parallel regions are
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_set_nested}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.10.
@end table


@node omp_get_num_procs
@section @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.5.
@end table


@node omp_get_num_threads
@section @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team. In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@code{OMP_NUM_THREADS} environment variable. At runtime, the size
of the current team may be set either by the @code{num_threads}
clause or by @code{omp_set_num_threads}. If none of the above were
used to define a specific value and @code{OMP_DYNAMIC} is disabled,
one thread per CPU online is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.2.
@end table


@node omp_get_schedule
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method. The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
@var{modifier}, is set to the chunk size.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *modifier);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, modifier)}
@item                   @tab @code{integer(kind=omp_sched_kind) kind}
@item                   @tab @code{integer modifier}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.12.
@end table


@node omp_get_team_size
@section @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in a thread team to which
either the current thread or its ancestor belongs. For values of @var{level}
outside the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
zero, 1 is returned, and for @code{omp_get_level}, the result is identical
to @code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item                   @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.18.
@end table


@node omp_get_thread_limit
@section @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads available to the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.13.
@end table


@node omp_get_thread_num
@section @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0. In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive. The return
value of the master thread of a team is always 0.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.4.
@end table


@node omp_in_parallel
@section @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.6.
@end table


@node omp_in_final
@section @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.20.
@end table


@node omp_set_dynamic
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int set);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(set)}
@item                   @tab @code{logical, intent(in) :: set}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.7.
@end table


@node omp_set_max_active_levels
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item                   @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.14.
@end table


@node omp_set_nested
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int set);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(set)}
@item                   @tab @code{logical, intent(in) :: set}
@end multitable

@item @emph{See also}:
@ref{OMP_NESTED}, @ref{omp_get_nested}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.9.
@end table


@node omp_set_num_threads
@section @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
sections, if those do not specify a @code{num_threads} clause. The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int n);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(n)}
@item                   @tab @code{integer, intent(in) :: n}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.1.
@end table


@node omp_set_schedule
@section @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{modifier} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{modifier} argument is ignored.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int modifier);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, modifier)}
@item                   @tab @code{integer(kind=omp_sched_kind) kind}
@item                   @tab @code{integer modifier}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.11.
@end table


@node omp_init_lock
@section @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock.  After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(out) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.1.
@end table


@node omp_set_lock
@section @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.3.
@end table


@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. In contrast to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.5.
@end table


@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
or more threads are waiting to set the lock, one of them acquires it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.4.
@end table


@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.2.
@end table



@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock.  After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(out) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.1.
@end table


@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.3.
@end table



@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned. Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable


@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.5.
@end table



@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} beforehand. In addition, the lock must be held by
the thread calling @code{omp_unset_nest_lock}. If the nesting count drops to
zero, the lock becomes unlocked. If one or more threads attempted to set the
lock before, one of them is chosen to set the lock to itself.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}, @ref{omp_test_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.4.
@end table



@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.2.
@end table



@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.4.2.
@end table



@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some "time in the past", which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.4.1.
@end table



@c ---------------------------------------------------------------------
@c Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter Environment Variables

The variables @env{OMP_DYNAMIC}, @env{OMP_MAX_ACTIVE_LEVELS},
@env{OMP_NESTED}, @env{OMP_NUM_THREADS}, @env{OMP_SCHEDULE},
@env{OMP_STACKSIZE}, @env{OMP_THREAD_LIMIT} and @env{OMP_WAIT_POLICY}
are defined by section 4 of the OpenMP specifications in version 3.1,
while @env{GOMP_CPU_AFFINITY} and @env{GOMP_STACKSIZE} are GNU
extensions.

@menu
* OMP_DYNAMIC::           Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
* OMP_NESTED::            Nested parallel regions
* OMP_NUM_THREADS::       Specifies the number of threads to use
* OMP_STACKSIZE::         Set default thread stack size
* OMP_SCHEDULE::          How threads are scheduled
* OMP_THREAD_LIMIT::      Set the maximum number of threads
* OMP_WAIT_POLICY::       How waiting threads are handled
* OMP_PROC_BIND::         Whether threads may be moved between CPUs
* GOMP_CPU_AFFINITY::     Bind threads to specific CPUs
* GOMP_STACKSIZE::        Set default thread stack size
@end menu


@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.3
@end table



@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, the number of active levels is unlimited.

@item @emph{See also}:
@ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.8
@end table



@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
regions are disabled by default.

@item @emph{See also}:
@ref{omp_set_nested}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.5
@end table



@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
each value specifies the number of threads to use for the corresponding
nesting level. If undefined, one thread per CPU is used.

@item @emph{See also}:
@ref{omp_set_num_threads}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.2
@end table



@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.

@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, sections 2.5.1 and 4.1
@end table



@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes. This is different from @code{pthread_attr_setstacksize},
which takes the number of bytes as an argument. If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged. If undefined, the stack size is system
dependent.

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.6
@end table



@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies the number of threads to use for the whole program. The
value of this variable shall be a positive integer. If undefined,
the number of threads is not limited.

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.9
@end table



@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they should.

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.7
@end table



@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{true}, OpenMP threads should not be moved; if set to @code{false},
they may be moved.

@item @emph{See also}:
@ref{GOMP_CPU_AFFINITY}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.4
@end table



@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Binds threads to specific CPUs. The variable should contain a space-separated
or comma-separated list of CPUs. This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N)
or a range with some stride (M-N:S).  CPU numbers are zero based. For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively and then start assigning back from the beginning of
the list.  @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.

There is no GNU OpenMP library routine to determine whether a CPU affinity
specification is in effect. As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable. A CPU affinity defined at startup cannot be changed
or disabled during the runtime of the application.

If this environment variable is omitted, the host system will handle the
assignment of threads to CPUs.

@item @emph{See also}:
@ref{OMP_PROC_BIND}
@end table



@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes. This is different from
@code{pthread_attr_setstacksize}, which takes the number of bytes as an
argument. If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged. If undefined,
the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}

@item @emph{Reference}:
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches Mailinglist},
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches Mailinglist}
@end table



@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternately, we generate two copies of the parallel subfunction
and only include this in the version run by the master thread.
Surely this is not worthwhile though...



@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name,

@smallexample
  void GOMP_critical_start (void);
  void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use omp_set_lock and omp_unset_lock with
name being transformed into a variable declared like

@smallexample
  omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.



@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.



@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.



@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In _most_ cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.



@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.



@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the master thread iterates over the
array to collect the values.



@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.



@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also not if we like.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample

@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample



@c ---------------------------------------------------------------------
@c
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU OpenMP implementation should be reported via
@uref{http://gcc.gnu.org/bugzilla/, bugzilla}.  For all cases, please add
"openmp" to the keywords field in the bug report.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Index
@unnumbered Index

@printindex cp

@bye
