 

Now that you have read [Primer](V1_5_Primer.md) and learned how to write tests
using Google Test, it's time to learn some new tricks. This document
will show you more assertions as well as how to construct complex
failure messages, propagate fatal failures, reuse and speed up your
test fixtures, and use various flags with your tests.

# More Assertions #

This section covers some less frequently used, but still significant,
assertions.

## Explicit Success and Failure ##

These three assertions do not actually test a value or expression. Instead,
they generate a success or failure directly. Like the macros that actually
perform a test, you may stream a custom failure message into them.

| `SUCCEED();` |
|:-------------|

Generates a success. This does NOT make the overall test succeed. A test is
considered successful only if none of its assertions fail during its execution.

Note: `SUCCEED()` is purely documentary and currently doesn't generate any
user-visible output. However, we may add `SUCCEED()` messages to Google Test's
output in the future.

| `FAIL();`  | `ADD_FAILURE();` |
|:-----------|:-----------------|

`FAIL*` generates a fatal failure while `ADD_FAILURE*` generates a nonfatal
failure. These are useful when control flow, rather than a Boolean expression,
determines the test's success or failure. For example, you might want to write
something like:

```
switch(expression) {
  case 1: ... some checks ...
  case 2: ... some other checks
  ...
  default: FAIL() << "We shouldn't get here.";
}
```

_Availability_: Linux, Windows, Mac.

## Exception Assertions ##

These are for verifying that a piece of code throws (or does not
throw) an exception of the given type:

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_THROW(`_statement_, _exception\_type_`);`  | `EXPECT_THROW(`_statement_, _exception\_type_`);`  | _statement_ throws an exception of the given type  |
| `ASSERT_ANY_THROW(`_statement_`);`                | `EXPECT_ANY_THROW(`_statement_`);`                | _statement_ throws an exception of any type        |
| `ASSERT_NO_THROW(`_statement_`);`                 | `EXPECT_NO_THROW(`_statement_`);`                 | _statement_ doesn't throw any exception            |

Examples:

```
ASSERT_THROW(Foo(5), bar_exception);

EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});
```

_Availability_: Linux, Windows, Mac; since version 1.1.0.

## Predicate Assertions for Better Error Messages ##

Even though Google Test has a rich set of assertions, they can never be
complete, as it's impossible (and not a good idea) to anticipate all the scenarios
a user might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()`
to check a complex expression, for lack of a better macro. This has the problem
of not showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users choose to construct the
failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward especially when the expression has side-effects or is expensive to
evaluate.
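
For example, with only plain Boolean checks you might end up writing something
like the sketch below (the `IsCloseEnough()` helper and its arguments are
hypothetical, used only to illustrate the problem):

```
bool IsCloseEnough(double x, double y);  // hypothetical helper used by the test

TEST(MotivationTest, BooleanCheckHidesValues) {
  double a = ComputeA();  // hypothetical values under test
  double b = ComputeB();
  // On failure this only reports "Actual: false" - the values of a and b are lost:
  EXPECT_TRUE(IsCloseEnough(a, b));
  // So people hand-craft the message, repeating the sub-expressions:
  EXPECT_TRUE(IsCloseEnough(a, b)) << " where a = " << a << " and b = " << b;
}
```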

Google Test gives you three different options to solve this problem:

### Using an Existing Boolean Function ###

If you already have a function or a functor that returns `bool` (or a type
that can be implicitly converted to `bool`), you can use it in a _predicate
assertion_ to get the function arguments printed for free:

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_PRED1(`_pred1, val1_`);`       | `EXPECT_PRED1(`_pred1, val1_`);` | _pred1(val1)_ returns true |
| `ASSERT_PRED2(`_pred2, val1, val2_`);` | `EXPECT_PRED2(`_pred2, val1, val2_`);` |  _pred2(val1, val2)_ returns true |
|  ...                | ...                    | ...          |

In the above, _predn_ is an _n_-ary predicate function or functor, where
_val1_, _val2_, ..., and _valn_ are its arguments. The assertion succeeds
if the predicate returns `true` when applied to the given arguments, and fails
otherwise. When the assertion fails, it prints the value of each argument. In
either case, the arguments are evaluated exactly once.

Here's an example. Given

```
// Returns true iff m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }
const int a = 3;
const int b = 4;
const int c = 10;
```

the assertion `EXPECT_PRED2(MutuallyPrime, a, b);` will succeed, while the
assertion `EXPECT_PRED2(MutuallyPrime, b, c);` will fail with the message

```
!MutuallyPrime(b, c) is false, where
b is 4
c is 10
```

**Notes:**

  1. If you see a compiler error "no matching function to call" when using `ASSERT_PRED*` or `EXPECT_PRED*`, please see [this](V1_5_FAQ.md#the-compiler-complains-about-undefined-references-to-some-static-const-member-variables-but-i-did-define-them-in-the-class-body-whats-wrong) for how to resolve it.
  1. Currently we only provide predicate assertions of arity <= 5. If you need a higher-arity assertion, let us know.

_Availability_: Linux, Windows, Mac

### Using a Function That Returns an AssertionResult ###

While `EXPECT_PRED*()` and friends are handy for a quick job, the
syntax is not satisfactory: you have to use different macros for
different arities, and it feels more like Lisp than C++.  The
`::testing::AssertionResult` class solves this problem.

An `AssertionResult` object represents the result of an assertion
(whether it's a success or a failure, and an associated message).  You
can create an `AssertionResult` using one of these factory
functions:

```
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```

You can then use the `<<` operator to stream messages to the
`AssertionResult` object.

To provide more readable messages in Boolean assertions
(e.g. `EXPECT_TRUE()`), write a predicate function that returns
`AssertionResult` instead of `bool`. For example, if you define
`IsEven()` as:

```
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```
bool IsEven(int n) {
  return (n % 2) == 0;
}
```

the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:

```
Value of: IsEven(Fib(4))
  Actual: false (*3 is odd*)
Expected: true
```

instead of a more opaque

```
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```

If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE`
as well, and are fine with making the predicate slower in the success
case, you can supply a success message:

```
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
```

Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print

```
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```

_Availability_: Linux, Windows, Mac; since version 1.4.1.

### Using a Predicate-Formatter ###

If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
`(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
predicate do not support streaming to `ostream`, you can instead use the
following _predicate-formatter assertions_ to _fully_ customize how the
message is formatted:

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_PRED_FORMAT1(`_pred\_format1, val1_`);`        | `EXPECT_PRED_FORMAT1(`_pred\_format1, val1_`);` | _pred\_format1(val1)_ is successful |
| `ASSERT_PRED_FORMAT2(`_pred\_format2, val1, val2_`);` | `EXPECT_PRED_FORMAT2(`_pred\_format2, val1, val2_`);` | _pred\_format2(val1, val2)_ is successful |
| `...`               | `...`                  | `...`        |

The difference between this and the previous two groups of macros is that instead of
a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a _predicate-formatter_
(_pred\_formatn_), which is a function or functor with the signature:

`::testing::AssertionResult PredicateFormattern(const char* `_expr1_`, const char* `_expr2_`, ... const char* `_exprn_`, T1 `_val1_`, T2 `_val2_`, ... Tn `_valn_`);`

where _val1_, _val2_, ..., and _valn_ are the values of the predicate
arguments, and _expr1_, _expr2_, ..., and _exprn_ are the corresponding
expressions as they appear in the source code. The types `T1`, `T2`, ..., and
`Tn` can be either value types or reference types. For example, if an
argument has type `Foo`, you can declare it as either `Foo` or `const Foo&`,
whichever is appropriate.

A predicate-formatter returns a `::testing::AssertionResult` object to indicate
whether the assertion has succeeded or not. The only way to create such an
object is to call one of the factory functions (`AssertionSuccess()` and
`AssertionFailure()`) introduced in the previous section.

As an example, let's improve the failure message in the previous example, which uses `EXPECT_PRED2()`:

```
// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }

// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n))
    return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure()
      << m_expr << " and " << n_expr << " (" << m << " and " << n
      << ") are not mutually prime, " << "as they have a common divisor "
      << SmallestPrimeCommonDivisor(m, n);
}
```

With this predicate-formatter, we can use

```
EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```

to generate the message

```
b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
```

As you may have realized, many of the assertions we introduced earlier are
special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.

_Availability_: Linux, Windows, Mac.


## Floating-Point Comparison ##

Comparing floating-point numbers is tricky. Due to round-off errors, it is
very unlikely that two floating-point values will match exactly. Therefore,
`ASSERT_EQ`'s naive comparison usually doesn't work. And since floating-point
values can have a wide range, no single fixed error bound works. It's better to
compare by a fixed relative error bound, except for values close to 0 due to
the loss of precision there.

In general, for floating-point comparison to make sense, the user needs to
carefully choose the error bound. If they don't want or care to, comparing in
terms of Units in the Last Place (ULPs) is a good default, and Google Test
provides assertions to do this. Full details about ULPs are quite long; if you
want to learn more, see
[this article on float comparison](http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm).

### Floating-Point Macros ###

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_FLOAT_EQ(`_expected, actual_`);`  | `EXPECT_FLOAT_EQ(`_expected, actual_`);` | the two `float` values are almost equal |
| `ASSERT_DOUBLE_EQ(`_expected, actual_`);` | `EXPECT_DOUBLE_EQ(`_expected, actual_`);` | the two `double` values are almost equal |

By "almost equal", we mean the two values are within 4 ULPs of each
other.

The following assertions allow you to choose the acceptable error bound:

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_NEAR(`_val1, val2, abs\_error_`);` | `EXPECT_NEAR(`_val1, val2, abs\_error_`);` | the difference between _val1_ and _val2_ doesn't exceed the given absolute error |

_Availability_: Linux, Windows, Mac.
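
As a quick illustration (the numeric values below are made up for this sketch):

```
TEST(FloatingPointTest, AlmostEqualAndNear) {
  // Almost-equal comparison (within 4 ULPs) tolerates accumulated round-off error:
  EXPECT_DOUBLE_EQ(0.3, 0.1 + 0.2);
  EXPECT_FLOAT_EQ(314.159265f, 3.14159265f * 10.0f * 10.0f);

  // An explicit absolute error bound:
  EXPECT_NEAR(3.14159, 3.1416, 0.0001);
}
```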

### Floating-Point Predicate-Format Functions ###

Some floating-point operations are useful, but not that often used. In order
to avoid an explosion of new macros, we provide them as predicate-format
functions that can be used in predicate assertion macros (e.g.
`EXPECT_PRED_FORMAT2`, etc).

```
EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
```

Verifies that _val1_ is less than, or almost equal to, _val2_. You can
replace `EXPECT_PRED_FORMAT2` in the above with `ASSERT_PRED_FORMAT2`.

_Availability_: Linux, Windows, Mac.

## Windows HRESULT assertions ##

These assertions test for `HRESULT` success or failure.

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_HRESULT_SUCCEEDED(`_expression_`);` | `EXPECT_HRESULT_SUCCEEDED(`_expression_`);` | _expression_ is a success `HRESULT` |
| `ASSERT_HRESULT_FAILED(`_expression_`);`    | `EXPECT_HRESULT_FAILED(`_expression_`);`    | _expression_ is a failure `HRESULT` |

The generated output contains the human-readable error message
associated with the `HRESULT` code returned by _expression_.

You might use them like this:

```
CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
```

_Availability_: Windows.

## Type Assertions ##

You can call the function
```
::testing::StaticAssertTypeEq<T1, T2>();
```
to assert that types `T1` and `T2` are the same.  The function does
nothing if the assertion is satisfied.  If the types are different,
the function call will fail to compile, and the compiler error message
will likely (depending on the compiler) show you the actual values of
`T1` and `T2`.  This is mainly useful inside template code.

_Caveat:_ When used inside a member function of a class template or a
function template, `StaticAssertTypeEq<T1, T2>()` is effective _only if_
the function is instantiated.  For example, given:
```
template <typename T> class Foo {
 public:
  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
};
```
the code:
```
void Test1() { Foo<bool> foo; }
```
will _not_ generate a compiler error, as `Foo<bool>::Bar()` is never
actually instantiated.  Instead, you need:
```
void Test2() { Foo<bool> foo; foo.Bar(); }
```
to cause a compiler error.

_Availability:_ Linux, Windows, Mac; since version 1.3.0.

## Assertion Placement ##

You can use assertions in any C++ function. In particular, it doesn't
have to be a method of the test fixture class. The one constraint is
that assertions that generate a fatal failure (`FAIL*` and `ASSERT_*`)
can only be used in void-returning functions. This is a consequence of
Google Test not using exceptions. If you place one in a non-void function,
you'll get a confusing compile error like
`"error: void value not ignored as it ought to be"`.

If you need to use assertions in a function that returns non-void, one option
is to make the function return the value in an out parameter instead. For
example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
need to make sure that `*result` contains some sensible value even when the
function returns prematurely. As the function now returns `void`, you can use
any assertion inside of it.
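
A minimal sketch of this rewrite, using a hypothetical `ParseAge()` helper (all
names here are made up for illustration):

```
// Original signature: int ParseAge(const std::string& s);
// Rewritten with an out parameter so fatal assertions may be used inside:
void ParseAge(const std::string& s, int* age) {
  *age = -1;  // keep *age sensible even if we return prematurely
  ASSERT_FALSE(s.empty()) << "cannot parse an empty string";
  *age = atoi(s.c_str());
  ASSERT_GE(*age, 0) << s << " is not a valid age";
}
```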

_Note_: Constructors and destructors are not considered void-returning
functions, according to the C++ language specification, and so you may not use
fatal assertions in them. You'll get a compilation error if you try. A simple
workaround is to transfer the entire body of the constructor or destructor to a
private void-returning method. However, you should be aware that a fatal
assertion failure in a constructor does not terminate the current test, as your
intuition might suggest; it merely returns from the constructor early, possibly
leaving your object in a partially-constructed state. Likewise, a fatal
assertion failure in a destructor may leave your object in a
partially-destructed state. Use assertions carefully in these situations!
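
A sketch of that workaround (the fixture and helper names are hypothetical):

```
class QueueTest : public ::testing::Test {
 protected:
  QueueTest() { Init(); }  // delegate the real work to a void-returning helper

 private:
  void Init() {
    queue_ = CreateTestQueue();      // hypothetical factory
    ASSERT_TRUE(queue_ != NULL);     // fatal assertions are allowed here
  }

  Queue* queue_;                     // hypothetical type under test
};
```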

# Death Tests #

In many applications, there are assertions that can cause application failure
if a condition is not met. These sanity checks, which ensure that the program
is in a known good state, are there to fail at the earliest possible time after
some program state is corrupted. If the assertion checks the wrong condition,
then the program may proceed in an erroneous state, which could lead to memory
corruption, security holes, or worse. Hence it is vitally important to test
that such assertion statements work as expected.

Since these precondition checks cause the processes to die, we call such tests
_death tests_. More generally, any test that checks that a program terminates
in an expected fashion is also a death test.

If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see [Catching Failures](#catching-failures).

## How to Write a Death Test ##

Google Test has the following macros to support death tests:

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_DEATH(`_statement, regex_`);` | `EXPECT_DEATH(`_statement, regex_`);` | _statement_ crashes with the given error |
| `ASSERT_DEATH_IF_SUPPORTED(`_statement, regex_`);` | `EXPECT_DEATH_IF_SUPPORTED(`_statement, regex_`);` | if death tests are supported, verifies that _statement_ crashes with the given error; otherwise verifies nothing |
| `ASSERT_EXIT(`_statement, predicate, regex_`);` | `EXPECT_EXIT(`_statement, predicate, regex_`);` | _statement_ exits with the given error and its exit code matches _predicate_ |

where _statement_ is a statement that is expected to cause the process to
die, _predicate_ is a function or function object that evaluates an integer
exit status, and _regex_ is a regular expression that the stderr output of
_statement_ is expected to match. Note that _statement_ can be _any valid
statement_ (including a _compound statement_) and doesn't have to be an
expression.

As usual, the `ASSERT` variants abort the current test function, while the
`EXPECT` variants do not.

**Note:** We use the word "crash" here to mean that the process
terminates with a _non-zero_ exit status code.  There are two
possibilities: either the process has called `exit()` or `_exit()`
with a non-zero value, or it may be killed by a signal.

This means that if _statement_ terminates the process with a 0 exit
code, it is _not_ considered a crash by `EXPECT_DEATH`.  Use
`EXPECT_EXIT` instead if this is the case, or if you want to restrict
the exit code more precisely.

A predicate here must accept an `int` and return a `bool`. The death test
succeeds only if the predicate returns `true`. Google Test defines a few
predicates that handle the most common cases:

```
::testing::ExitedWithCode(exit_code)
```

This expression is `true` if the program exited normally with the given exit
code.

```
::testing::KilledBySignal(signal_number)  // Not available on Windows.
```

This expression is `true` if the program was killed by the given signal.

The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
that verifies the process' exit code is non-zero.

Note that a death test only cares about three things:

  1. does _statement_ abort or exit the process?
  1. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status satisfy _predicate_?  Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) is the exit status non-zero?  And
  1. does the stderr output match _regex_?

In particular, if _statement_ generates an `ASSERT_*` or `EXPECT_*` failure, it will **not** cause the death test to fail, as Google Test assertions don't abort the process.

To write a death test, simply use one of the above macros inside your test
function. For example,

```
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({ int n = 5; Foo(&n); }, "Error on line .* of Foo()");
}
TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
}
TEST(MyDeathTest, KillMyself) {
  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL), "Sending myself unblockable signal");
}
```

verifies that:

  * calling `Foo(5)` causes the process to die with the given error message,
  * calling `NormalExit()` causes the process to print `"Success"` to stderr and exit with exit code 0, and
  * calling `KillMyself()` kills the process with signal `SIGKILL`.

The test function body may contain other assertions and statements as well, if
necessary.

_Important:_ We strongly recommend that you follow the convention of naming your
test case (not test) `*DeathTest` when it contains a death test, as
demonstrated in the above example. The `Death Tests And Threads` section below
explains why.

If a test fixture class is shared by normal tests and death tests, you
can use a typedef to introduce an alias for the fixture class and avoid
duplicating its code:
```
class FooTest : public ::testing::Test { ... };

typedef FooTest FooDeathTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```

_Availability:_ Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac (the latter three are supported since v1.3.0).  `(ASSERT|EXPECT)_DEATH_IF_SUPPORTED` are new in v1.4.0.

## Regular Expression Syntax ##

On POSIX systems (e.g. Linux, Cygwin, and Mac), Google Test uses the
[POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
syntax in death tests. To learn about this syntax, you may want to read this [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).

On Windows, Google Test uses its own simple regular expression
implementation. It lacks many features you can find in POSIX extended
regular expressions.  For example, we don't support union (`"x|y"`),
grouping (`"(xy)"`), brackets (`"[xy]"`), and repetition count
(`"x{5,7}"`), among others. Below is what we do support (`A` denotes a
literal character, period (`.`), or a single `\\` escape sequence; `x`
and `y` denote regular expressions.):

| `c` | matches any literal character `c` |
|:----|:----------------------------------|
| `\\d` | matches any decimal digit         |
| `\\D` | matches any character that's not a decimal digit |
| `\\f` | matches `\f`                      |
| `\\n` | matches `\n`                      |
| `\\r` | matches `\r`                      |
| `\\s` | matches any ASCII whitespace, including `\n` |
| `\\S` | matches any character that's not a whitespace |
| `\\t` | matches `\t`                      |
| `\\v` | matches `\v`                      |
| `\\w` | matches any letter, `_`, or decimal digit |
| `\\W` | matches any character that `\\w` doesn't match |
| `\\c` | matches any literal character `c`, which must be a punctuation character |
| `.` | matches any single character except `\n` |
| `A?` | matches 0 or 1 occurrences of `A` |
| `A*` | matches 0 or many occurrences of `A` |
| `A+` | matches 1 or many occurrences of `A` |
| `^` | matches the beginning of a string (not that of each line) |
| `$` | matches the end of a string (not that of each line) |
| `xy` | matches `x` followed by `y`       |

To help you determine which capability is available on your system,
Google Test defines the macro `GTEST_USES_POSIX_RE=1` when it uses POSIX
extended regular expressions, or `GTEST_USES_SIMPLE_RE=1` when it uses
the simple version.  If you want your death tests to work in both
cases, you can either `#if` on these macros or use the more limited
syntax only.
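
For instance, a small sketch of branching on these macros (the pattern strings
and the `DoThis()` call are made up):

```
#if GTEST_USES_POSIX_RE
  // The richer POSIX extended syntax is available.
  EXPECT_DEATH(DoThis(), "invalid value: (foo|bar)");
#else  // GTEST_USES_SIMPLE_RE
  // Stick to the limited syntax described above.
  EXPECT_DEATH(DoThis(), "invalid value: .*");
#endif
```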

## How It Works ##

Under the hood, `ASSERT_EXIT()` spawns a new process and executes the
death test statement in that process. The details of how precisely
that happens depend on the platform and the variable
`::testing::GTEST_FLAG(death_test_style)` (which is initialized from the
command-line flag `--gtest_death_test_style`).

  * On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the child, after which:
    * If the variable's value is `"fast"`, the death test statement is immediately executed.
    * If the variable's value is `"threadsafe"`, the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.
  * On Windows, the child is spawned using the `CreateProcess()` API, and re-executes the binary to cause just the single death test under consideration to be run - much like the `threadsafe` mode on POSIX.

Other values for the variable are illegal and will cause the death test to
fail. Currently, the flag's default value is `"fast"`. However, we reserve the
right to change it in the future. Therefore, your tests should not depend on
this.

In either case, the parent process waits for the child process to complete, and checks that

  1. the child's exit status satisfies the predicate, and
  1. the child's stderr matches the regular expression.

If the death test statement runs to completion without dying, the child
process will nonetheless terminate, and the assertion fails.

## Death Tests And Threads ##

The reason for the two death test styles has to do with thread safety. Due to
well-known problems with forking in the presence of threads, death tests should
be run in a single-threaded context. Sometimes, however, it isn't feasible to
arrange that kind of environment. For example, statically-initialized modules
may start threads before main is ever reached. Once threads have been created,
it may be difficult or impossible to clean them up.

Google Test has three features intended to raise awareness of threading issues.

  1. A warning is emitted if multiple threads are running when a death test is encountered.
  1. Test cases with a name ending in "DeathTest" are run before all other tests.
  1. It uses `clone()` instead of `fork()` to spawn the child process on Linux (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are
executed in a separate process and cannot affect the parent.

## Death Test Styles ##

The "threadsafe" death test style was introduced in order to help mitigate the
risks of testing in a possibly multithreaded environment. It trades increased
test execution time (potentially dramatically so) for improved thread safety.
We suggest using the faster, default "fast" style unless your test has specific
problems with it.

You can choose a particular style of death tests by setting the flag
programmatically:

```
::testing::FLAGS_gtest_death_test_style = "threadsafe";
```

You can do this in `main()` to set the style for all death tests in the
binary, or in individual tests. Recall that flags are saved before running each
test and restored afterwards, so you need not do that yourself. For example:

```
TEST(MyDeathTest, TestOne) {
  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  ::testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}
```

## Caveats ##

The _statement_ argument of `ASSERT_EXIT()` can be any valid C++ statement
except that it cannot return from the current function. This means
_statement_ should not contain `return` or a macro that might return (e.g.
`ASSERT_TRUE()`). If _statement_ returns before it crashes, Google Test will
print an error message, and the test will fail.

Since _statement_ runs in the child process, any in-memory side effect (e.g.
modifying a variable, releasing memory, etc.) it causes will _not_ be observable
in the parent process. In particular, if you release memory in a death test,
your program will fail the heap check as the parent process will never see the
memory reclaimed. To solve this problem, you can

  1. try not to free memory in a death test;
  1. free the memory again in the parent process; or
  1. do not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test
assertions on the same line; otherwise, compilation will fail with an unobvious
error message.

Despite the improved thread safety afforded by the "threadsafe" style of death
test, thread problems such as deadlock are still possible in the presence of
handlers registered with `pthread_atfork(3)`.

# Using Assertions in Sub-routines #

## Adding Traces to Assertions ##

If a test sub-routine is called from several places, when an assertion
inside it fails, it can be hard to tell which invocation of the
sub-routine the failure is from.  You can alleviate this problem using
extra logging or custom failure messages, but that usually clutters up
your tests. A better solution is to use the `SCOPED_TRACE` macro:

| `SCOPED_TRACE(`_message_`);` |
|:-----------------------------|

where _message_ can be anything streamable to `std::ostream`. This
macro will cause the current file name, line number, and the given
message to be added in every failure message. The effect will be
undone when control leaves the current lexical scope.

For example,

```
10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```

could result in messages like these:

```
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
   Trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
```

Without the trace, it would've been difficult to know which invocation
of `Sub1()` the two failures come from respectively. (You could add an
extra message to each assertion in `Sub1()` to indicate the value of
`n`, but that's tedious.)

Some tips on using `SCOPED_TRACE`:

  1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the beginning of a sub-routine, instead of at each call site.
  1. When calling sub-routines inside a loop, make the loop iterator part of the message in `SCOPED_TRACE` such that you can know which iteration the failure is from (see the sketch after this list).
  1. Sometimes the line number of the trace point is enough for identifying the particular invocation of a sub-routine. In this case, you don't have to choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
  1. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer scope. In this case, all active trace points will be included in the failure messages, in reverse order they are encountered.
  1. The trace dump is clickable in Emacs' compilation buffer - hit return on a line number and you'll be taken to that line in the source file!
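
A minimal sketch of tip 2, assuming a hypothetical `RunOneIteration()` helper:

```
for (int i = 0; i < 5; ++i) {
  // Record the loop iterator so a failure message names the iteration it came from.
  SCOPED_TRACE(::testing::Message() << "iteration #" << i);
  RunOneIteration(i);  // hypothetical sub-routine containing assertions
}
```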

_Availability:_ Linux, Windows, Mac.

## Propagating Fatal Failures ##

A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
when they fail they only abort the _current function_, not the entire test. For
example, the following test will segfault:
```
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);
  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();
  // The intended behavior is for the fatal failure
  // in Subroutine() to abort the entire test.
  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3; // Segfault!
}
```

Since we don't use exceptions, it is technically impossible to
implement the intended behavior here.  To alleviate this, Google Test
provides two solutions.  You could use either the
`(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions or the
`HasFatalFailure()` function.  They are described in the following two
subsections.

### Asserting on Subroutines ###

As shown above, if your test calls a subroutine that has an `ASSERT_*`
failure in it, the test will continue after the subroutine
returns. This may not be what you want.

Often people want fatal failures to propagate like exceptions.  For
that Google Test offers the following macros:

| **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
|:--------------------|:-----------------------|:-------------|
| `ASSERT_NO_FATAL_FAILURE(`_statement_`);` | `EXPECT_NO_FATAL_FAILURE(`_statement_`);` | _statement_ doesn't generate any new fatal failures in the current thread. |

Only failures in the thread that executes the assertion are checked to
determine the result of this type of assertion.  If _statement_
creates new threads, failures in these threads are ignored.

Examples:

```
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```

_Availability:_ Linux, Windows, Mac. Assertions from multiple threads
are currently not supported.

### Checking for Failures in the Current Test ###

`HasFatalFailure()` in the `::testing::Test` class returns `true` if an
assertion in the current test has suffered a fatal failure. This
allows functions to catch fatal failures in a sub-routine and return
early.

```
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```

The typical usage, which basically simulates the behavior of a thrown
exception, is:

```
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure())
    return;
  // The following won't be executed.
  ...
}
```

If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:

```
if (::testing::Test::HasFatalFailure())
  return;
```

Similarly, `HasNonfatalFailure()` returns `true` if the current test
has at least one non-fatal failure, and `HasFailure()` returns `true`
if the current test has at least one failure of either kind.

_Availability:_ Linux, Windows, Mac.  `HasNonfatalFailure()` and
`HasFailure()` are available since version 1.4.0.

# Logging Additional Information #

In your test code, you can call `RecordProperty("key", value)` to log
additional information, where `value` can be either a C string or a 32-bit
integer. The _last_ value recorded for a key will be emitted to the XML output
if you specify one. For example, the test

```
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```

will output XML like this:

```
...
  <testcase name="MinAndMaxWidgets" status="run" time="..." classname="WidgetUsageTest"
            MaximumWidgets="12"
            MinimumWidgets="9" />
...
```

_Note_:
  * `RecordProperty()` is a static member of the `Test` class. Therefore it needs to be prefixed with `::testing::Test::` if used outside of the `TEST` body and the test fixture class.
  * `key` must be a valid XML attribute name, and cannot conflict with the ones already used by Google Test (`name`, `status`, `time`, and `classname`).

_Availability_: Linux, Windows, Mac.

# Sharing Resources Between Tests in the Same Test Case #

Google Test creates a new test fixture object for each test in order to make
tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.

If the tests don't change the resource, there's no harm in them sharing a
single resource copy. So, in addition to per-test set-up/tear-down, Google Test
also supports per-test-case set-up/tear-down. To use it:

  1. In your test fixture class (say `FooTest`), define as `static` some member variables to hold the shared resources.
  1. In the same test fixture class, define a `static void SetUpTestCase()` function (remember not to spell it as **`SetupTestCase`** with a small `u`!) to set up the shared resources and a `static void TearDownTestCase()` function to tear them down.

That's it! Google Test automatically calls `SetUpTestCase()` before running the
_first test_ in the `FooTest` test case (i.e. before creating the first
`FooTest` object), and calls `TearDownTestCase()` after running the _last test_
in it (i.e. after deleting the last `FooTest` object). In between, the tests
can use the shared resources.

Remember that the test order is undefined, so your code can't depend on a test
preceding or following another. Also, the tests must either not modify the
state of any shared resource, or, if they do modify the state, they must
restore the state to its original value before passing control to the next
test.

Here's an example of per-test-case set-up and tear-down:
```
class FooTest : public ::testing::Test {
 protected:
  // Per-test-case set-up.
  // Called before the first test in this test case.
  // Can be omitted if not needed.
  static void SetUpTestCase() {
    shared_resource_ = new ...;
  }

  // Per-test-case tear-down.
  // Called after the last test in this test case.
  // Can be omitted if not needed.
  static void TearDownTestCase() {
    delete shared_resource_;
    shared_resource_ = NULL;
  }

  // You can define per-test set-up and tear-down logic as usual.
  virtual void SetUp() { ... }
  virtual void TearDown() { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = NULL;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}
TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```

_Availability:_ Linux, Windows, Mac.

# Global Set-Up and Tear-Down #

Just as you can do set-up and tear-down at the test level and the test case
level, you can also do it at the test program level. Here's how.

First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set-up and tear-down:

```
class Environment {
 public:
  virtual ~Environment() {}
  // Override this to define how to set up the environment.
  virtual void SetUp() {}
  // Override this to define how to tear down the environment.
  virtual void TearDown() {}
};
```

Then, you register an instance of your environment class with Google Test by
calling the `::testing::AddGlobalTestEnvironment()` function:

```
Environment* AddGlobalTestEnvironment(Environment* env);
```

Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
the environment object, then runs the tests if there were no fatal failures, and
finally calls `TearDown()` of the environment object.

It's OK to register multiple environment objects. In this case, their `SetUp()`
will be called in the order they are registered, and their `TearDown()` will be
called in the reverse order.

Note that Google Test takes ownership of the registered environment objects.
Therefore **do not delete them** by yourself.

You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is
called, probably in `main()`. If you use `gtest_main`, you need to call
this before `main()` starts for it to take effect. One way to do this is to
define a global variable like this:

```
::testing::Environment* const foo_env = ::testing::AddGlobalTestEnvironment(new FooEnvironment);
```

However, we strongly recommend that you write your own `main()` and call
`AddGlobalTestEnvironment()` there, as relying on initialization of global
variables makes the code harder to read and may cause problems when you
register multiple environments from different translation units and the
environments have dependencies among them (remember that the compiler doesn't
guarantee the order in which global variables from different translation units
are initialized).
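
A minimal sketch of that recommendation (assuming a `FooEnvironment` subclass
defined as described above):

```
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Google Test takes ownership of the registered environment - do not delete it.
  ::testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```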

_Availability:_ Linux, Windows, Mac.

# Value Parameterized Tests #

_Value-parameterized tests_ allow you to test your code with different
parameters without writing multiple copies of the same test.

Suppose you write a test for your code and then realize that your code is affected by the presence of a Boolean command line flag.

```
TEST(MyCodeTest, TestFoo) {
  // Code to test foo().
}
```

Usually people factor their test code into a function with a Boolean parameter in such situations. The function sets the flag, then executes the testing code.

```
void TestFooHelper(bool flag_value) {
  flag = flag_value;
  // Code to test foo().
}

TEST(MyCodeTest, TestFoo) {
  TestFooHelper(false);
  TestFooHelper(true);
}
```

But this setup has serious drawbacks. First, when a test assertion fails in your tests, it becomes unclear what value of the parameter caused it to fail. You can stream a clarifying message into your `EXPECT`/`ASSERT` statements, but then you'll have to do it for all of them. Second, you have to add one such helper function per test. What if you have ten tests? Twenty? A hundred?

Value-parameterized tests will let you write your test only once and then easily instantiate and run it with an arbitrary number of parameter values.

Here are some other situations when value-parameterized tests come in handy:

  * You want to test different implementations of an OO interface.
  * You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it!

## How to Write Value-Parameterized Tests ##

To write value-parameterized tests, first you should define a fixture
class. It must be derived from `::testing::TestWithParam<T>`, where `T`
is the type of your parameter values. `TestWithParam<T>` is itself
derived from `::testing::Test`. `T` can be any copyable type. If it's
a raw pointer, you are responsible for managing the lifespan of the
pointed values.

```
class FooTest : public ::testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};
```

Then, use the `TEST_P` macro to define as many test patterns using
this fixture as you want.  The `_P` suffix is for "parameterized" or
"pattern", whichever you prefer to think of.

```
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```

Finally, you can use `INSTANTIATE_TEST_CASE_P` to instantiate the test
case with any set of parameters you want. Google Test defines a number of
functions for generating test parameters. They return what we call
(surprise!) _parameter generators_. Here is a summary of them,
which are all in the `testing` namespace:

| `Range(begin, end[, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
|:----------------------------|:------------------------------------------------------------------------------------------------------------------|
| `Values(v1, v2, ..., vN)`   | Yields values `{v1, v2, ..., vN}`.                                                                                |
| `ValuesIn(container)` and `ValuesIn(begin, end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`.                  |
| `Bool()`                    | Yields sequence `{false, true}`.                                                                                  |
| `Combine(g1, g2, ..., gN)`  | Yields all combinations (the Cartesian product for the math savvy) of the values generated by the `N` generators. This is only available if your system provides the `<tr1/tuple>` header. If you are sure your system does, and Google Test disagrees, you can override it by defining `GTEST_HAS_TR1_TUPLE=1`. See comments in [include/gtest/internal/gtest-port.h](../include/gtest/internal/gtest-port.h) for more information. |

For more details, see the comments at the definitions of these functions in the [source code](../include/gtest/gtest-param-test.h).

The following statement will instantiate tests from the `FooTest` test case
each with parameter values `"meeny"`, `"miny"`, and `"moe"`.

```
INSTANTIATE_TEST_CASE_P(InstantiationName,
                        FooTest,
                        ::testing::Values("meeny", "miny", "moe"));
```

To distinguish different instances of the pattern (yes, you can
instantiate it more than once), the first argument to
`INSTANTIATE_TEST_CASE_P` is a prefix that will be added to the actual
test case name. Remember to pick unique prefixes for different
instantiations. The tests from the instantiation above will have these
names:

  * `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
  * `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
  * `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
  * `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
  * `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
  * `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`

You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).

This statement will instantiate all tests from `FooTest` again, each
with parameter values `"cat"` and `"dog"`:

```
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest,
                        ::testing::ValuesIn(pets));
```

The tests from the instantiation above will have these names:

  * `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"`
  * `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"`
  * `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"`
  * `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"`

Please note that `INSTANTIATE_TEST_CASE_P` will instantiate _all_
tests in the given test case, whether their definitions come before or
_after_ the `INSTANTIATE_TEST_CASE_P` statement.

You can see
[these](../samples/sample7_unittest.cc)
[files](../samples/sample8_unittest.cc) for more examples.

_Availability_: Linux, Windows (requires MSVC 8.0 or above), Mac; since version 1.2.0.

## Creating Value-Parameterized Abstract Tests ##

In the above, we define and instantiate `FooTest` in the same source
file. Sometimes you may want to define value-parameterized tests in a
library and let other people instantiate them later. This pattern is
known as abstract tests. As an example of its application, when you
are designing an interface you can write a standard suite of abstract
tests (perhaps using a factory function as the test parameter) that
all implementations of the interface are expected to pass. When
someone implements the interface, he can instantiate your suite to get
all the interface-conformance tests for free.

To define abstract tests, you should organize your code like this:

  1. Put the definition of the parameterized test fixture class (e.g. `FooTest`) in a header file, say `foo_param_test.h`. Think of this as _declaring_ your abstract tests.
  1. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes `foo_param_test.h`. Think of this as _implementing_ your abstract tests.

Once they are defined, you can instantiate them by including
`foo_param_test.h`, invoking `INSTANTIATE_TEST_CASE_P()`, and linking
with `foo_param_test.cc`. You can instantiate the same abstract test
case multiple times, possibly in different source files.
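
For instance, the organization could look roughly like this (the file and test
names are hypothetical):

```
// foo_param_test.h - declares the abstract tests.
class FooTest : public ::testing::TestWithParam<const char*> { ... };

// foo_param_test.cc - implements the test patterns; includes foo_param_test.h.
TEST_P(FooTest, DoesBlah) { ... }

// client_test.cc - a client instantiates the suite with its own parameters.
INSTANTIATE_TEST_CASE_P(MyInstance, FooTest, ::testing::Values("a", "b"));
```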
1193
 
1194
# Typed Tests #
1195
 
1196
Suppose you have multiple implementations of the same interface and
1197
want to make sure that all of them satisfy some common requirements.
1198
Or, you may have defined several types that are supposed to conform to
1199
the same "concept" and you want to verify it.  In both cases, you want
1200
the same test logic repeated for different types.
1201
 
1202
While you can write one `TEST` or `TEST_F` for each type you want to
1203
test (and you may even factor the test logic into a function template
1204
that you invoke from the `TEST`), it's tedious and doesn't scale:
1205
if you want _m_ tests over _n_ types, you'll end up writing _m\*n_
1206
`TEST`s.
1207
 
1208
_Typed tests_ allow you to repeat the same test logic over a list of
1209
types.  You only need to write the test logic once, although you must
1210
know the type list when writing typed tests.  Here's how you do it:
1211
 
1212
First, define a fixture class template.  It should be parameterized
1213
by a type.  Remember to derive it from `::testing::Test`:
1214
 
1215
```
1216
template 
1217
class FooTest : public ::testing::Test {
1218
 public:
1219
  ...
1220
  typedef std::list List;
1221
  static T shared_;
1222
  T value_;
1223
};
1224
```
1225
 
1226
Next, associate a list of types with the test case, which will be
1227
repeated for each type in the list:
1228
 
1229
```
1230
typedef ::testing::Types MyTypes;
1231
TYPED_TEST_CASE(FooTest, MyTypes);
1232
```
1233
 
1234
The `typedef` is necessary for the `TYPED_TEST_CASE` macro to parse
1235
correctly.  Otherwise the compiler will think that each comma in the
1236
type list introduces a new macro argument.
1237
 
1238
Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test
1239
for this test case.  You can repeat this as many times as you want:
1240
 
1241
```
1242
TYPED_TEST(FooTest, DoesBlah) {
1243
  // Inside a test, refer to the special name TypeParam to get the type
1244
  // parameter.  Since we are inside a derived class template, C++ requires
1245
  // us to visit the members of FooTest via 'this'.
1246
  TypeParam n = this->value_;
1247
 
1248
  // To visit static members of the fixture, add the 'TestFixture::'
1249
  // prefix.
1250
  n += TestFixture::shared_;
1251
 
1252
  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
1253
  // prefix.  The 'typename' is required to satisfy the compiler.
1254
  typename TestFixture::List values;
1255
  values.push_back(n);
1256
  ...
1257
}
1258
 
1259
TYPED_TEST(FooTest, HasPropertyA) { ... }
1260
```
1261
 
1262
You can see `samples/sample6_unittest.cc` for a complete example.
1263
 
1264
_Availability:_ Linux, Windows (requires MSVC 8.0 or above), Mac;
1265
since version 1.1.0.
1266
 
1267
# Type-Parameterized Tests #
1268
 
1269
_Type-parameterized tests_ are like typed tests, except that they
1270
don't require you to know the list of types ahead of time.  Instead,
1271
you can define the test logic first and instantiate it with different
1272
type lists later.  You can even instantiate it more than once in the
1273
same program.
1274
 
1275
If you are designing an interface or concept, you can define a suite
1276
of type-parameterized tests to verify properties that any valid
1277
implementation of the interface/concept should have.  Then, the author
1278
of each implementation can just instantiate the test suite with his
1279
type to verify that it conforms to the requirements, without having to
1280
write similar tests repeatedly.  Here's an example:
1281
 
1282
First, define a fixture class template, as we did with typed tests:
1283
 
1284
```
1285
template <typename T>
1286
class FooTest : public ::testing::Test {
1287
  ...
1288
};
1289
```
1290
 
1291
Next, declare that you will define a type-parameterized test case:
1292
 
1293
```
1294
TYPED_TEST_CASE_P(FooTest);
1295
```
1296
 
1297
The `_P` suffix is for "parameterized" or "pattern", whichever you
1298
prefer to think of it as.
1299
 
1300
Then, use `TYPED_TEST_P()` to define a type-parameterized test.  You
1301
can repeat this as many times as you want:
1302
 
1303
```
1304
TYPED_TEST_P(FooTest, DoesBlah) {
1305
  // Inside a test, refer to TypeParam to get the type parameter.
1306
  TypeParam n = 0;
1307
  ...
1308
}
1309
 
1310
TYPED_TEST_P(FooTest, HasPropertyA) { ... }
1311
```
1312
 
1313
Now the tricky part: you need to register all test patterns using the
1314
`REGISTER_TYPED_TEST_CASE_P` macro before you can instantiate them.
1315
The first argument of the macro is the test case name; the rest are
1316
the names of the tests in this test case:
1317
 
1318
```
1319
REGISTER_TYPED_TEST_CASE_P(FooTest,
1320
                           DoesBlah, HasPropertyA);
1321
```
1322
 
1323
Finally, you are free to instantiate the pattern with the types you
1324
want.  If you put the above code in a header file, you can `#include`
1325
it in multiple C++ source files and instantiate it multiple times.
1326
 
1327
```
1328
typedef ::testing::Types<char, int, unsigned int> MyTypes;
1329
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes);
1330
```
1331
 
1332
To distinguish different instances of the pattern, the first argument
1333
to the `INSTANTIATE_TYPED_TEST_CASE_P` macro is a prefix that will be
1334
added to the actual test case name.  Remember to pick unique prefixes
1335
for different instances.
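
For instance, the pattern and its registration might live in a shared header and be instantiated once per implementation, each with its own prefix. The file, type, and test names below are purely illustrative:

```
// foo_test.h -- hypothetical header holding the pattern (add include guards).
#include <vector>
#include <gtest/gtest.h>

template <typename T>
class FooTest : public ::testing::Test {};

TYPED_TEST_CASE_P(FooTest);

TYPED_TEST_P(FooTest, IsDefaultConstructible) {
  // Passes as long as TypeParam can be default-constructed.
  TypeParam value = TypeParam();
  (void)value;
}

REGISTER_TYPED_TEST_CASE_P(FooTest, IsDefaultConstructible);

// vector_test.cc -- one instantiation, prefixed "Vector".
// #include "foo_test.h"
typedef ::testing::Types<std::vector<int>, std::vector<double> > VectorTypes;
INSTANTIATE_TYPED_TEST_CASE_P(Vector, FooTest, VectorTypes);

// list_test.cc -- a second instantiation, prefixed "List".
// #include "foo_test.h"
// typedef ::testing::Types<std::list<int> > ListTypes;
// INSTANTIATE_TYPED_TEST_CASE_P(List, FooTest, ListTypes);
```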
1336
 
1337
In the special case where the type list contains only one type, you
1338
can write that type directly without `::testing::Types<...>`, like this:
1339
 
1340
```
1341
INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);
1342
```
1343
 
1344
You can see `samples/sample6_unittest.cc` for a complete example.
1345
 
1346
_Availability:_ Linux, Windows (requires MSVC 8.0 or above), Mac;
1347
since version 1.1.0.
1348
 
1349
# Testing Private Code #
1350
 
1351
If you change your software's internal implementation, your tests should not
1352
break as long as the change is not observable by users. Therefore, per the
1353
_black-box testing principle_, most of the time you should test your code
1354
through its public interfaces.
1355
 
1356
If you still find yourself needing to test internal implementation code,
1357
consider if there's a better design that wouldn't require you to do so. If you
1358
absolutely have to test non-public interface code though, you can. There are
1359
two cases to consider:
1360
 
1361
  * Static functions (_not_ the same as static member functions!) or unnamed namespaces, and
1362
  * Private or protected class members
1363
 
1364
## Static Functions ##
1365
 
1366
Both static functions and definitions/declarations in an unnamed namespace are
1367
only visible within the same translation unit. To test them, you can `#include`
1368
the entire `.cc` file being tested in your `*_test.cc` file. (`#include`ing `.cc`
1369
files is not a good way to reuse code - you should not do this in production
1370
code!)
1371
 
1372
However, a better approach is to move the private code into the
1373
`foo::internal` namespace, where `foo` is the namespace your project normally
1374
uses, and put the private declarations in a `*-internal.h` file. Your
1375
production `.cc` files and your tests are allowed to include this internal
1376
header, but your clients are not. This way, you can fully test your internal
1377
implementation without leaking it to your clients.
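
As a rough sketch (the file and function names here are made up), the layout could look like this:

```
// foo-internal.h -- lives next to the production code and is not shipped
// to clients.
namespace foo {
namespace internal {

// An implementation detail we still want to test directly.
inline int Clamp(int value, int low, int high) {
  return value < low ? low : (value > high ? high : value);
}

}  // namespace internal
}  // namespace foo

// foo_test.cc -- production code and tests may include the internal header;
// client code may not.
#include <gtest/gtest.h>

TEST(ClampTest, StaysWithinBounds) {
  EXPECT_EQ(5, foo::internal::Clamp(7, 0, 5));
}
```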
1378
 
1379
## Private Class Members ##
1380
 
1381
Private class members are only accessible from within the class or by friends.
1382
To access a class' private members, you can declare your test fixture as a
1383
friend to the class and define accessors in your fixture. Tests using the
1384
fixture can then access the private members of your production class via the
1385
accessors in the fixture. Note that even though your fixture is a friend to
1386
your production class, your tests are not automatically friends to it, as they
1387
are technically defined in sub-classes of the fixture.
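
A minimal sketch of this arrangement, with invented class and member names, might look like:

```
// queue.h -- the production class.
class Queue {
 public:
  Queue() : size_(0) {}
 private:
  friend class QueueTest;  // The fixture is a friend of the class...
  int size_;
};

// queue_test.cc
#include <gtest/gtest.h>

class QueueTest : public ::testing::Test {
 protected:
  // ...so accessors defined in the fixture can reach its private members.
  // Individual tests are sub-classes of QueueTest and call the accessor;
  // they are not friends of Queue themselves.
  static int GetSize(const Queue& q) { return q.size_; }
};

TEST_F(QueueTest, StartsEmpty) {
  Queue q;
  EXPECT_EQ(0, GetSize(q));
}
```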
1388
 
1389
Another way to test private members is to refactor them into an implementation
1390
class, which is then declared in a `*-internal.h` file. Your clients aren't
1391
allowed to include this header, but your tests can. This is called the Pimpl
1392
(Private Implementation) idiom.
1393
 
1394
Or, you can declare an individual test as a friend of your class by adding this
1395
line in the class body:
1396
 
1397
```
1398
FRIEND_TEST(TestCaseName, TestName);
1399
```
1400
 
1401
For example,
1402
```
1403
// foo.h
1404
#include <gtest/gtest_prod.h>  // Defines FRIEND_TEST.
1407
class Foo {
1408
  ...
1409
 private:
1410
  FRIEND_TEST(FooTest, BarReturnsZeroOnNull);
1411
  int Bar(void* x);
1412
};
1413
 
1414
// foo_test.cc
1415
...
1416
TEST(FooTest, BarReturnsZeroOnNull) {
1417
  Foo foo;
1418
  EXPECT_EQ(0, foo.Bar(NULL));
1419
  // Uses Foo's private member Bar().
1420
}
1421
```
1422
 
1423
Pay special attention when your class is defined in a namespace, as you should
1424
define your test fixtures and tests in the same namespace if you want them to
1425
be friends of your class. For example, if the code to be tested looks like:
1426
 
1427
```
1428
namespace my_namespace {
1429
 
1430
class Foo {
1431
  friend class FooTest;
1432
  FRIEND_TEST(FooTest, Bar);
1433
  FRIEND_TEST(FooTest, Baz);
1434
  ...
1435
  definition of the class Foo
1436
  ...
1437
};
1438
 
1439
}  // namespace my_namespace
1440
```
1441
 
1442
Your test code should be something like:
1443
 
1444
```
1445
namespace my_namespace {
1446
class FooTest : public ::testing::Test {
1447
 protected:
1448
  ...
1449
};
1450
 
1451
TEST_F(FooTest, Bar) { ... }
1452
TEST_F(FooTest, Baz) { ... }
1453
 
1454
}  // namespace my_namespace
1455
```
1456
 
1457
# Catching Failures #
1458
 
1459
If you are building a testing utility on top of Google Test, you'll
1460
want to test your utility.  What framework would you use to test it?
1461
Google Test, of course.
1462
 
1463
The challenge is to verify that your testing utility reports failures
1464
correctly.  In frameworks that report a failure by throwing an
1465
exception, you could catch the exception and assert on it.  But Google
1466
Test doesn't use exceptions, so how do we test that a piece of code
1467
generates an expected failure?
1468
 
1469
`<gtest/gtest-spi.h>` contains some constructs to do this.  After
1470
`#include`ing this header, you can use
1471
 
1472
| `EXPECT_FATAL_FAILURE(`_statement, substring_`);` |
1473
|:--------------------------------------------------|
1474
 
1475
to assert that _statement_ generates a fatal (e.g. `ASSERT_*`) failure
1476
whose message contains the given _substring_, or use
1477
 
1478
| `EXPECT_NONFATAL_FAILURE(`_statement, substring_`);` |
1479
|:-----------------------------------------------------|
1480
 
1481
if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
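
For example, a self-test for a small helper assertion could look roughly like this (the `ExpectPositive()` helper is invented for illustration):

```
#include <gtest/gtest.h>
#include <gtest/gtest-spi.h>

// A hypothetical utility built on top of Google Test.
void ExpectPositive(int n) {
  EXPECT_GT(n, 0) << "value must be positive";
}

// Verifies that the utility reports a non-fatal failure whose message
// contains the expected substring.
TEST(ExpectPositiveTest, FailsOnNonPositiveInput) {
  EXPECT_NONFATAL_FAILURE(ExpectPositive(-1), "must be positive");
}
```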
1482
 
1483
For technical reasons, there are some caveats:
1484
 
1485
  1. You cannot stream a failure message to either macro.
1486
  1. _statement_ in `EXPECT_FATAL_FAILURE()` cannot reference local non-static variables or non-static members of `this` object.
1487
  1. _statement_ in `EXPECT_FATAL_FAILURE()` cannot return a value.
1488
 
1489
_Note:_ Google Test is designed with threads in mind.  Once the
necessary synchronization primitives have been implemented, Google
Test will become thread-safe, meaning that you can then use assertions
in multiple threads concurrently.  Before that, however, Google Test
only supports single-threaded usage.  Once
1495
thread-safe, `EXPECT_FATAL_FAILURE()` and `EXPECT_NONFATAL_FAILURE()`
1496
will capture failures in the current thread only. If _statement_
1497
creates new threads, failures in these threads will be ignored.  If
1498
you want to capture failures from all threads instead, you should use
1499
the following macros:
1500
 
1501
| `EXPECT_FATAL_FAILURE_ON_ALL_THREADS(`_statement, substring_`);` |
1502
|:-----------------------------------------------------------------|
1503
| `EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(`_statement, substring_`);` |
1504
 
1505
# Getting the Current Test's Name #
1506
 
1507
Sometimes a function may need to know the name of the currently running test.
1508
For example, you may be using the `SetUp()` method of your test fixture to set
1509
the golden file name based on which test is running. The `::testing::TestInfo`
1510
class has this information:
1511
 
1512
```
1513
namespace testing {
1514
 
1515
class TestInfo {
1516
 public:
1517
  // Returns the test case name and the test name, respectively.
1518
  //
1519
  // Do NOT delete or free the return value - it's managed by the
1520
  // TestInfo class.
1521
  const char* test_case_name() const;
1522
  const char* name() const;
1523
};
1524
 
1525
}  // namespace testing
1526
```
1527
 
1528
 
1529
To obtain a `TestInfo` object for the currently running test, call
1530
`current_test_info()` on the `UnitTest` singleton object:
1531
 
1532
```
1533
// Gets information about the currently running test.
1534
// Do NOT delete the returned object - it's managed by the UnitTest class.
1535
const ::testing::TestInfo* const test_info =
1536
  ::testing::UnitTest::GetInstance()->current_test_info();
1537
printf("We are in test %s of test case %s.\n",
1538
       test_info->name(), test_info->test_case_name());
1539
```
1540
 
1541
`current_test_info()` returns a null pointer if no test is running. In
1542
particular, you cannot find the test case name in `SetUpTestCase()`,
1543
`TearDownTestCase()` (where you know the test case name implicitly), or
1544
functions called from them.
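
As a sketch, a fixture's `SetUp()` could use this to derive a per-test golden file name (the naming scheme below is just an example):

```
#include <string>
#include <gtest/gtest.h>

class GoldenTest : public ::testing::Test {
 protected:
  virtual void SetUp() {
    const ::testing::TestInfo* const info =
        ::testing::UnitTest::GetInstance()->current_test_info();
    // E.g. GoldenTest.MatchesOutput would use "GoldenTest_MatchesOutput.golden".
    golden_file_ = std::string(info->test_case_name()) + "_" +
                   info->name() + ".golden";
  }

  std::string golden_file_;
};
```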
1545
 
1546
_Availability:_ Linux, Windows, Mac.
1547
 
1548
# Extending Google Test by Handling Test Events #
1549
 
1550
Google Test provides an event listener API to let you receive
1551
notifications about the progress of a test program and test
1552
failures. The events you can listen to include the start and end of
1553
the test program, a test case, or a test method, among others. You may
1554
use this API to augment or replace the standard console output,
1555
replace the XML output, or provide a completely different form of
1556
output, such as a GUI or a database. You can also use test events as
1557
checkpoints to implement a resource leak checker, for example.
1558
 
1559
_Availability:_ Linux, Windows, Mac; since v1.4.0.
1560
 
1561
## Defining Event Listeners ##
1562
 
1563
To define an event listener, you subclass either
1564
[testing::TestEventListener](../include/gtest/gtest.h#L855)
1565
or [testing::EmptyTestEventListener](../include/gtest/gtest.h#L905).
1566
The former is an (abstract) interface, where each pure virtual method
1567
can be overridden to handle a test event (for example, when a test
1568
starts, the `OnTestStart()` method will be called). The latter provides
1569
an empty implementation of all methods in the interface, such that a
1570
subclass only needs to override the methods it cares about.
1571
 
1572
When an event is fired, its context is passed to the handler function
1573
as an argument. The following argument types are used:
1574
  * [UnitTest](../include/gtest/gtest.h#L1007) reflects the state of the entire test program,
1575
  * [TestCase](../include/gtest/gtest.h#L689) has information about a test case, which can contain one or more tests,
1576
  * [TestInfo](../include/gtest/gtest.h#L599) contains the state of a test, and
1577
  * [TestPartResult](../include/gtest/gtest-test-part.h#L42) represents the result of a test assertion.
1578
 
1579
An event handler function can examine the argument it receives to find
1580
out interesting information about the event and the test program's
1581
state.  Here's an example:
1582
 
1583
```
1584
  class MinimalistPrinter : public ::testing::EmptyTestEventListener {
1585
    // Called before a test starts.
1586
    virtual void OnTestStart(const ::testing::TestInfo& test_info) {
1587
      printf("*** Test %s.%s starting.\n",
1588
             test_info.test_case_name(), test_info.name());
1589
    }
1590
 
1591
    // Called after a failed assertion or a SUCCEED().
1592
    virtual void OnTestPartResult(
1593
        const ::testing::TestPartResult& test_part_result) {
1594
      printf("%s in %s:%d\n%s\n",
1595
             test_part_result.failed() ? "*** Failure" : "Success",
1596
             test_part_result.file_name(),
1597
             test_part_result.line_number(),
1598
             test_part_result.summary());
1599
    }
1600
 
1601
    // Called after a test ends.
1602
    virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
1603
      printf("*** Test %s.%s ending.\n",
1604
             test_info.test_case_name(), test_info.name());
1605
    }
1606
  };
1607
```
1608
 
1609
## Using Event Listeners ##
1610
 
1611
To use the event listener you have defined, add an instance of it to
1612
the Google Test event listener list (represented by class
1613
[TestEventListeners](../include/gtest/gtest.h#L929)
1614
- note the "s" at the end of the name) in your
1615
`main()` function, before calling `RUN_ALL_TESTS()`:
1616
```
1617
int main(int argc, char** argv) {
1618
  ::testing::InitGoogleTest(&argc, argv);
1619
  // Gets hold of the event listener list.
1620
  ::testing::TestEventListeners& listeners =
1621
      ::testing::UnitTest::GetInstance()->listeners();
1622
  // Adds a listener to the end.  Google Test takes the ownership.
1623
  listeners.Append(new MinimalistPrinter);
1624
  return RUN_ALL_TESTS();
1625
}
1626
```
1627
 
1628
There's only one problem: the default test result printer is still in
1629
effect, so its output will mingle with the output from your minimalist
1630
printer. To suppress the default printer, just release it from the
1631
event listener list and delete it. You can do so by adding one line:
1632
```
1633
  ...
1634
  delete listeners.Release(listeners.default_result_printer());
1635
  listeners.Append(new MinimalistPrinter);
1636
  return RUN_ALL_TESTS();
1637
```
1638
 
1639
Now, sit back and enjoy a completely different output from your
1640
tests. For more details, you can read this
1641
[sample](../samples/sample9_unittest.cc).
1642
 
1643
You may append more than one listener to the list. When an `On*Start()`
1644
or `OnTestPartResult()` event is fired, the listeners will receive it in
1645
the order they appear in the list (since new listeners are added to
1646
the end of the list, the default text printer and the default XML
1647
generator will receive the event first). An `On*End()` event will be
1648
received by the listeners in the _reverse_ order. This allows output by
1649
listeners added later to be framed by output from listeners added
1650
earlier.
1651
 
1652
## Generating Failures in Listeners ##
1653
 
1654
You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`,
1655
`FAIL()`, etc) when processing an event. There are some restrictions:
1656
 
1657
  1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will cause `OnTestPartResult()` to be called recursively).
1658
  1. A listener that handles `OnTestPartResult()` is not allowed to generate any failure.
1659
 
1660
When you add listeners to the listener list, you should put listeners
1661
that handle `OnTestPartResult()` _before_ listeners that can generate
1662
failures. This ensures that failures generated by the latter are
1663
attributed to the right test by the former.
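
For instance, a listener might check an invariant at the end of every test and report a violation as a failure; the object-counting function below is a hypothetical stand-in:

```
#include <gtest/gtest.h>

// Hypothetical stand-in for a counter maintained by the code under test.
int GetLiveObjectCount() { return 0; }

class LeakChecker : public ::testing::EmptyTestEventListener {
  // Raising a failure here is allowed: this listener does not handle
  // OnTestPartResult(), so the restrictions above do not apply.
  virtual void OnTestEnd(const ::testing::TestInfo& /* test_info */) {
    EXPECT_EQ(0, GetLiveObjectCount()) << "This test leaked objects.";
  }
};
```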
1664
 
1665
We have a sample of failure-raising listener
1666
[here](../samples/sample10_unittest.cc).
1667
 
1668
# Running Test Programs: Advanced Options #
1669
 
1670
Google Test test programs are ordinary executables. Once built, you can run
1671
them directly and affect their behavior via the following environment variables
1672
and/or command line flags. For the flags to work, your programs must call
1673
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.
1674
 
1675
To see a list of supported flags and their usage, please run your test
1676
program with the `--help` flag.  You can also use `-h`, `-?`, or `/?`
1677
for short.  This feature was added in version 1.3.0.
1678
 
1679
If an option is specified both by an environment variable and by a
1680
flag, the latter takes precedence.  Most of the options can also be
1681
set/read in code: to access the value of command line flag
1682
`--gtest_foo`, write `::testing::GTEST_FLAG(foo)`.  A common pattern is
1683
to set the value of a flag before calling `::testing::InitGoogleTest()`
1684
to change the default value of the flag:
1685
```
1686
int main(int argc, char** argv) {
1687
  // Disables elapsed time by default.
1688
  ::testing::GTEST_FLAG(print_time) = false;
1689
 
1690
  // This allows the user to override the flag on the command line.
1691
  ::testing::InitGoogleTest(&argc, argv);
1692
 
1693
  return RUN_ALL_TESTS();
1694
}
1695
```
1696
 
1697
## Selecting Tests ##
1698
 
1699
This section shows various options for choosing which tests to run.
1700
 
1701
### Listing Test Names ###
1702
 
1703
Sometimes it is necessary to list the available tests in a program before
1704
running them so that a filter may be applied if needed. Including the flag
1705
`--gtest_list_tests` overrides all other flags and lists tests in the following
1706
format:
1707
```
1708
TestCase1.
1709
  TestName1
1710
  TestName2
1711
TestCase2.
1712
  TestName
1713
```
1714
 
1715
None of the tests listed are actually run if the flag is provided. There is no
1716
corresponding environment variable for this flag.
1717
 
1718
_Availability:_ Linux, Windows, Mac.
1719
 
1720
### Running a Subset of the Tests ###
1721
 
1722
By default, a Google Test program runs all tests the user has defined.
1723
Sometimes, you want to run only a subset of the tests (e.g. for debugging or
1724
quickly verifying a change). If you set the `GTEST_FILTER` environment variable
1725
or the `--gtest_filter` flag to a filter string, Google Test will only run the
1726
tests whose full names (in the form of `TestCaseName.TestName`) match the
1727
filter.
1728
 
1729
The format of a filter is a '`:`'-separated list of wildcard patterns (called
1730
the positive patterns) optionally followed by a '`-`' and another
1731
'`:`'-separated pattern list (called the negative patterns). A test matches the
1732
filter if and only if it matches any of the positive patterns but does not
1733
match any of the negative patterns.
1734
 
1735
A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
1736
character). For convenience, the filter `'*-NegativePatterns'` can also be
1737
written as `'-NegativePatterns'`.
1738
 
1739
For example:
1740
 
1741
  * `./foo_test` Has no flag, and thus runs all its tests.
1742
  * `./foo_test --gtest_filter=*` Also runs everything, due to the single match-everything `*` value.
1743
  * `./foo_test --gtest_filter=FooTest.*` Runs everything in test case `FooTest`.
1744
  * `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full name contains either `"Null"` or `"Constructor"`.
1745
  * `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
1746
  * `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test case `FooTest` except `FooTest.Bar`.
1747
 
1748
_Availability:_ Linux, Windows, Mac.
1749
 
1750
### Temporarily Disabling Tests ###
1751
 
1752
If you have a broken test that you cannot fix right away, you can add the
1753
`DISABLED_` prefix to its name. This will exclude it from execution. This is
1754
better than commenting out the code or using `#if 0`, as disabled tests are
1755
still compiled (and thus won't rot).
1756
 
1757
If you need to disable all tests in a test case, you can either add `DISABLED_`
1758
to the front of the name of each test, or alternatively add it to the front of
1759
the test case name.
1760
 
1761
For example, the following tests won't be run by Google Test, even though they
1762
will still be compiled:
1763
 
1764
```
1765
// Tests that Foo does Abc.
1766
TEST(FooTest, DISABLED_DoesAbc) { ... }
1767
 
1768
class DISABLED_BarTest : public ::testing::Test { ... };
1769
 
1770
// Tests that Bar does Xyz.
1771
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
1772
```
1773
 
1774
_Note:_ This feature should only be used for temporary pain-relief. You still
1775
have to fix the disabled tests at a later date. As a reminder, Google Test will
1776
print a banner warning you if a test program contains any disabled tests.
1777
 
1778
_Tip:_ You can easily count the number of disabled tests you have
1779
using `grep`. This number can be used as a metric for improving your
1780
test quality.
1781
 
1782
_Availability:_ Linux, Windows, Mac.
1783
 
1784
### Temporarily Enabling Disabled Tests ###
1785
 
1786
To include [disabled tests](#temporarily-disabling-tests) in test
1787
execution, just invoke the test program with the
1788
`--gtest_also_run_disabled_tests` flag or set the
1789
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other
1790
than `0`.  You can combine this with the
1791
[--gtest\_filter](#running-a-subset-of-the-tests) flag to further select
1792
which disabled tests to run.
1793
 
1794
_Availability:_ Linux, Windows, Mac; since version 1.3.0.
1795
 
1796
## Repeating the Tests ##
1797
 
1798
Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
1799
will fail only 1% of the time, making it rather hard to reproduce the bug under
1800
a debugger. This can be a major source of frustration.
1801
 
1802
The `--gtest_repeat` flag allows you to repeat all (or selected) test methods
1803
in a program many times. Hopefully, a flaky test will eventually fail and give
1804
you a chance to debug. Here's how to use it:
1805
 
1806
| `$ foo_test --gtest_repeat=1000` | Repeat foo\_test 1000 times and don't stop at failures. |
1807
|:---------------------------------|:--------------------------------------------------------|
1808
| `$ foo_test --gtest_repeat=-1`   | A negative count means repeating forever.               |
1809
| `$ foo_test --gtest_repeat=1000 --gtest_break_on_failure` | Repeat foo\_test 1000 times, stopping at the first failure. This is especially useful when running under a debugger: when the test fails, it will drop into the debugger and you can then inspect variables and stacks. |
1810
| `$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar` | Repeat the tests whose name matches the filter 1000 times. |
1811
 
1812
If your test program contains global set-up/tear-down code registered
1813
using `AddGlobalTestEnvironment()`, it will be repeated in each
1814
iteration as well, as the flakiness may be in it. You can also specify
1815
the repeat count by setting the `GTEST_REPEAT` environment variable.
1816
 
1817
_Availability:_ Linux, Windows, Mac.
1818
 
1819
## Shuffling the Tests ##
1820
 
1821
You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
1822
environment variable to `1`) to run the tests in a program in a random
1823
order. This helps to reveal bad dependencies between tests.
1824
 
1825
By default, Google Test uses a random seed calculated from the current
1826
time. Therefore you'll get a different order every time. The console
1827
output includes the random seed value, such that you can reproduce an
1828
order-related test failure later. To specify the random seed
1829
explicitly, use the `--gtest_random_seed=SEED` flag (or set the
1830
`GTEST_RANDOM_SEED` environment variable), where `SEED` is an integer
1831
between 0 and 99999. The seed value 0 is special: it tells Google Test
1832
to do the default behavior of calculating the seed from the current
1833
time.
1834
 
1835
If you combine this with `--gtest_repeat=N`, Google Test will pick a
1836
different random seed and re-shuffle the tests in each iteration.
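
If you prefer to configure this in code, the same options can be set through the `GTEST_FLAG` pattern shown earlier; a minimal sketch:

```
#include <gtest/gtest.h>

int main(int argc, char** argv) {
  // Shuffle with a fixed seed and repeat the whole run 10 times by default;
  // command line flags can still override these values.
  ::testing::GTEST_FLAG(shuffle) = true;
  ::testing::GTEST_FLAG(random_seed) = 12345;
  ::testing::GTEST_FLAG(repeat) = 10;

  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```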
1837
 
1838
_Availability:_ Linux, Windows, Mac; since v1.4.0.
1839
 
1840
## Controlling Test Output ##
1841
 
1842
This section teaches how to tweak the way test results are reported.
1843
 
1844
### Colored Terminal Output ###
1845
 
1846
Google Test can use colors in its terminal output to make it easier to spot
1847
the separation between tests, and whether tests passed.
1848
 
1849
You can set the GTEST\_COLOR environment variable or set the `--gtest_color`
1850
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
1851
disable colors, or let Google Test decide. When the value is `auto`, Google
1852
Test will use colors if and only if the output goes to a terminal and (on
1853
non-Windows platforms) the `TERM` environment variable is set to `xterm` or
1854
`xterm-color`.
1855
 
1856
_Availability:_ Linux, Windows, Mac.
1857
 
1858
### Suppressing the Elapsed Time ###
1859
 
1860
By default, Google Test prints the time it takes to run each test.  To
1861
suppress that, run the test program with the `--gtest_print_time=0`
1862
command line flag.  Setting the `GTEST_PRINT_TIME` environment
1863
variable to `0` has the same effect.
1864
 
1865
_Availability:_ Linux, Windows, Mac.  (In Google Test 1.3.0 and lower,
1866
the default behavior is that the elapsed time is **not** printed.)
1867
 
1868
### Generating an XML Report ###
1869
 
1870
Google Test can emit a detailed XML report to a file in addition to its normal
1871
textual output. The report contains the duration of each test, and thus can
1872
help you identify slow tests.
1873
 
1874
To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
1875
`--gtest_output` flag to the string `"xml:_path_to_output_file_"`, which will
1876
create the file at the given location. You can also just use the string
1877
`"xml"`, in which case the output can be found in the `test_detail.xml` file in
1878
the current directory.
1879
 
1880
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
1881
`"xml:output\directory\"` on Windows), Google Test will create the XML file in
1882
that directory, named after the test executable (e.g. `foo_test.xml` for test
1883
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
1884
over from a previous run), Google Test will pick a different name (e.g.
1885
`foo_test_1.xml`) to avoid overwriting it.
1886
 
1887
The report is based on the format of the
1888
`junitreport` Ant task and can be parsed by popular continuous build
1889
systems like [Hudson](https://hudson.dev.java.net/). Since that format
1890
was originally intended for Java, a little interpretation is required
1891
to make it apply to Google Test tests, as shown here:
1892
 
1893
```
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```
1904
 
1905
  * The root `<testsuites>` element corresponds to the entire test program.
1906
  * `<testsuite>` elements correspond to Google Test test cases.
1907
  * `<testcase>` elements correspond to Google Test test functions.
1908
 
1909
For instance, the following program
1910
 
1911
```
1912
TEST(MathTest, Addition) { ... }
1913
TEST(MathTest, Subtraction) { ... }
1914
TEST(LogicTest, NonContradiction) { ... }
1915
```
1916
 
1917
could generate this report:
1918
 
1919
```
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="35" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="15">
    <testcase name="Addition" status="run" time="7" classname="">
      <failure message="..." type=""/>
      <failure message="..." type=""/>
    </testcase>
    <testcase name="Subtraction" status="run" time="5" classname="">
    </testcase>
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="5">
    <testcase name="NonContradiction" status="run" time="0" classname="">
    </testcase>
  </testsuite>
</testsuites>
1935
```
1936
 
1937
Things to note:
1938
 
1939
  * The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how many test functions the Google Test program or test case contains, while the `failures` attribute tells how many of them failed.
1940
  * The `time` attribute expresses the duration of the test, test case, or entire test program in milliseconds.
1941
  * Each `<failure>` element corresponds to a single failed Google Test assertion.
1942
  * Some JUnit concepts don't apply to Google Test, yet we have to conform to the DTD. Therefore you'll see some dummy elements and attributes in the report. You can safely ignore these parts.
1943
 
1944
_Availability:_ Linux, Windows, Mac.
1945
 
1946
## Controlling How Failures Are Reported ##
1947
 
1948
### Turning Assertion Failures into Break-Points ###
1949
 
1950
When running test programs under a debugger, it's very convenient if the
1951
debugger can catch an assertion failure and automatically drop into interactive
1952
mode. Google Test's _break-on-failure_ mode supports this behavior.
1953
 
1954
To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
1955
other than `0` . Alternatively, you can use the `--gtest_break_on_failure`
1956
command line flag.
1957
 
1958
_Availability:_ Linux, Windows, Mac.
1959
 
1960
### Suppressing Pop-ups Caused by Exceptions ###
1961
 
1962
On Windows, Google Test may be used with exceptions enabled. Even when
1963
exceptions are disabled, an application can still throw structured exceptions
1964
(SEH's). If a test throws an exception, by default Google Test doesn't try to
1965
catch it. Instead, you'll see a pop-up dialog, at which point you can attach
1966
the process to a debugger and easily find out what went wrong.
1967
 
1968
However, if you don't want to see the pop-ups (for example, if you run the
1969
tests in a batch job), set the `GTEST_CATCH_EXCEPTIONS` environment variable to
1970
a non- `0` value, or use the `--gtest_catch_exceptions` flag. Google Test now
1971
catches all test-thrown exceptions and logs them as failures.
1972
 
1973
_Availability:_ Windows. `GTEST_CATCH_EXCEPTIONS` and
1974
`--gtest_catch_exceptions` have no effect on Google Test's behavior on Linux or
1975
Mac, even if exceptions are enabled. It is possible to add support for catching
1976
exceptions on these platforms, but it is not implemented yet.
1977
 
1978
### Letting Another Testing Framework Drive ###
1979
 
1980
If you work on a project that has already been using another testing
1981
framework and is not ready to completely switch to Google Test yet,
1982
you can get much of Google Test's benefit by using its assertions in
1983
your existing tests.  Just change your `main()` function to look
1984
like:
1985
 
1986
```
1987
#include 
1988
 
1989
int main(int argc, char** argv) {
1990
  ::testing::GTEST_FLAG(throw_on_failure) = true;
1991
  // Important: Google Test must be initialized.
1992
  ::testing::InitGoogleTest(&argc, argv);
1993
 
1994
  ... whatever your existing testing framework requires ...
1995
}
1996
```
1997
 
1998
With that, you can use Google Test assertions in addition to the
1999
native assertions your testing framework provides, for example:
2000
 
2001
```
2002
void TestFooDoesBar() {
2003
  Foo foo;
2004
  EXPECT_LE(foo.Bar(1), 100);     // A Google Test assertion.
2005
  CPPUNIT_ASSERT(foo.IsEmpty());  // A native assertion.
2006
}
2007
```
2008
 
2009
If a Google Test assertion fails, it will print an error message and
2010
throw an exception, which will be treated as a failure by your host
2011
testing framework.  If you compile your code with exceptions disabled,
2012
a failed Google Test assertion will instead exit your program with a
2013
non-zero code, which will also signal a test failure to your test
2014
runner.
2015
 
2016
If you don't write `::testing::GTEST_FLAG(throw_on_failure) = true;` in
2017
your `main()`, you can alternatively enable this feature by specifying
2018
the `--gtest_throw_on_failure` flag on the command-line or setting the
2019
`GTEST_THROW_ON_FAILURE` environment variable to a non-zero value.
2020
 
2021
_Availability:_ Linux, Windows, Mac; since v1.3.0.
2022
 
2023
## Distributing Test Functions to Multiple Machines ##
2024
 
2025
If you have more than one machine you can use to run a test program,
2026
you might want to run the test functions in parallel and get the
2027
result faster.  We call this technique _sharding_, where each machine
2028
is called a _shard_.
2029
 
2030
Google Test is compatible with test sharding.  To take advantage of
2031
this feature, your test runner (not part of Google Test) needs to do
2032
the following:
2033
 
2034
  1. Allocate a number of machines (shards) to run the tests.
2035
  1. On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the total number of shards.  It must be the same for all shards.
2036
  1. On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index of the shard.  Different shards must be assigned different indices, which must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
2037
  1. Run the same test program on all shards.  When Google Test sees the above two environment variables, it will select a subset of the test functions to run.  Across all shards, each test function in the program will be run exactly once.
2038
  1. Wait for all shards to finish, then collect and report the results.
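
Below is a minimal single-machine stand-in for such a runner, just to illustrate the protocol; a real runner would dispatch each shard to its own machine, and the `./foo_test` path is hypothetical:

```
// shard_runner.cc -- illustrative only.
#include <cstdio>
#include <cstdlib>

int main() {
  const int kTotalShards = 3;
  for (int index = 0; index < kTotalShards; ++index) {
    char command[256];
    // Export the two variables and run the same test binary for each shard
    // (this form requires a POSIX shell).
    std::snprintf(command, sizeof(command),
                  "GTEST_TOTAL_SHARDS=%d GTEST_SHARD_INDEX=%d ./foo_test",
                  kTotalShards, index);
    if (std::system(command) != 0) {
      std::fprintf(stderr, "Shard %d reported failures.\n", index);
    }
  }
  return 0;
}
```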
2039
 
2040
Your project may have tests that were written without Google Test and
2041
thus don't understand this protocol.  In order for your test runner to
2042
figure out which test supports sharding, it can set the environment
2043
variable `GTEST_SHARD_STATUS_FILE` to a non-existent file path.  If a
2044
test program supports sharding, it will create this file to
2045
acknowledge the fact (the actual contents of the file are not
2046
important at this time, although we may put some useful information
2047
in it in the future); otherwise it will not create it.
2048
 
2049
Here's an example to make it clear.  Suppose you have a test program
2050
`foo_test` that contains the following 5 test functions:
2051
```
2052
TEST(A, V)
2053
TEST(A, W)
2054
TEST(B, X)
2055
TEST(B, Y)
2056
TEST(B, Z)
2057
```
2058
and you have 3 machines at your disposal.  To run the test functions in
2059
parallel, you would set `GTEST_TOTAL_SHARDS` to 3 on all machines, and
2060
set `GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively.
2061
Then you would run the same `foo_test` on each machine.
2062
 
2063
Google Test reserves the right to change how the work is distributed
2064
across the shards, but here's one possible scenario:
2065
 
2066
  * Machine #0 runs `A.V` and `B.X`.
2067
  * Machine #1 runs `A.W` and `B.Y`.
2068
  * Machine #2 runs `B.Z`.
2069
 
2070
_Availability:_ Linux, Windows, Mac; since version 1.3.0.
2071
 
2072
# Fusing Google Test Source Files #
2073
 
2074
Google Test's implementation consists of ~30 files (excluding its own
2075
tests).  Sometimes you may want them to be packaged up in two files (a
2076
`.h` and a `.cc`) instead, such that you can easily copy them to a new
2077
machine and start hacking there.  For this we provide an experimental
2078
Python script `fuse_gtest_files.py` in the `scripts/` directory (since release 1.3.0).
2079
Assuming you have Python 2.4 or above installed on your machine, just
2080
go to that directory and run
2081
```
2082
python fuse_gtest_files.py OUTPUT_DIR
2083
```
2084
 
2085
and you should see an `OUTPUT_DIR` directory being created with files
2086
`gtest/gtest.h` and `gtest/gtest-all.cc` in it.  These files contain
2087
everything you need to use Google Test.  Just copy them to anywhere
2088
you want and you are ready to write tests.  You can use the
2089
[scrpts/test/Makefile](../scripts/test/Makefile)
2090
file as an example on how to compile your tests against them.
2091
 
2092
# Where to Go from Here #
2093
 
2094
Congratulations! You've now learned more advanced Google Test tools and are
2095
ready to tackle more complex testing tasks. If you want to dive even deeper, you
2096
can read the [FAQ](V1_5_FAQ.md).
