'use strict';

const {getTestFlags} = require('./TestFlags');
const {
  assertConsoleLogsCleared,
  resetAllUnexpectedConsoleCalls,
  patchConsoleMethods,
} = require('internal-test-utils/consoleMock');
const path = require('path');

if (process.env.REACT_CLASS_EQUIVALENCE_TEST) {
  // Inside the class equivalence tester, we have a custom environment, so
  // let's require that instead.
  require('./spec-equivalence-reporter/setupTests.js');
} else {
  const errorMap = require('../error-codes/codes.json');

  // By default, jest.spyOn also calls the spied method.
  const spyOn = jest.spyOn;
  const noop = jest.fn;

  // Can be used to normalize paths in stack frames.
  global.__REACT_ROOT_PATH_TEST__ = path.resolve(__dirname, '../..');
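  // For instance (illustrative; nothing in this file uses it directly), a
  // reporter or matcher can rewrite an absolute frame like
  //   at Foo (<root>/packages/react/src/React.js:10:5)
  // by stripping the __REACT_ROOT_PATH_TEST__ prefix, keeping stack-based
  // snapshots machine-independent.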

  // Spying on console methods in production builds can mask errors.
  // This is why we added an explicit spyOnDev() helper.
  // It's too easy to accidentally use the more familiar spyOn() helper though,
  // so we disable it entirely.
  // Spying on both dev and prod will require using both spyOnDev()
  // and spyOnProd().
  global.spyOn = function () {
    throw new Error(
      'Do not use spyOn(). ' +
        'It can accidentally hide unexpected errors in production builds. ' +
        'Use spyOnDev(), spyOnProd(), or spyOnDevAndProd() instead.'
    );
  };

  if (process.env.NODE_ENV === 'production') {
    global.spyOnDev = noop;
    global.spyOnProd = spyOn;
    global.spyOnDevAndProd = spyOn;
  } else {
    global.spyOnDev = spyOn;
    global.spyOnProd = noop;
    global.spyOnDevAndProd = spyOn;
  }
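
  // A minimal usage sketch (hypothetical test code, not part of this setup;
  // renderSomethingThatWarns is a made-up helper):
  //   spyOnDev(console, 'error');
  //   renderSomethingThatWarns();
  //   if (__DEV__) {
  //     expect(console.error).toHaveBeenCalledTimes(1);
  //   }
  // spyOnDev() is a noop in production runs, which is why the assertion is
  // guarded by __DEV__.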

  expect.extend({
    ...require('./matchers/reactTestMatchers'),
    ...require('./matchers/toThrow'),
  });

  // We have a Babel transform that inserts guards against infinite loops.
  // If a loop runs for too many iterations, we throw an error and set this
  // global variable. The global lets us detect an infinite loop even if
  // the actual error object ends up being caught and ignored. An infinite
  // loop must always fail the test!
  beforeEach(() => {
    global.infiniteLoopError = null;
  });
  afterEach(() => {
    const error = global.infiniteLoopError;
    global.infiniteLoopError = null;
    if (error) {
      throw error;
    }
  });
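
  // Roughly what the transform's guarded output looks like (sketch; the real
  // guard's variable names and iteration limit may differ):
  //   let iterations = 0;
  //   while (cond) {
  //     if (iterations++ > MAX_ITERATIONS) {
  //       global.infiniteLoopError = new RangeError('Potential infinite loop');
  //       throw global.infiniteLoopError;
  //     }
  //     doWork();
  //   }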

  // Patch the console so that every console error/warn/log call must be
  // asserted on by the test (console.log is only tracked on CI).
  patchConsoleMethods({includeLog: !!process.env.CI});
  beforeEach(resetAllUnexpectedConsoleCalls);
  afterEach(assertConsoleLogsCleared);
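  // Consequence for tests (descriptive note): a test that triggers
  // console.error or console.warn without asserting on it will fail in the
  // afterEach hook above via assertConsoleLogsCleared.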

  // TODO: enable this check so we don't forget to reset spyOnX mocks.
  // afterEach(() => {
  //   if (
  //     console[methodName] !== mockMethod &&
  //     !jest.isMockFunction(console[methodName])
  //   ) {
  //     throw new Error(
  //       `Test did not tear down console.${methodName} mock properly.`
  //     );
  //   }
  // });

  if (process.env.NODE_ENV === 'production') {
    // In production, we strip error messages and turn them into codes.
    // This decodes them back so that the test assertions on them work.
    // 1. `ErrorProxy` decodes error messages at Error construction time and
    //    also proxies error instances with `proxyErrorInstance`.
    // 2. `proxyErrorInstance` decodes error messages when the `message`
    //    property is changed.
    const decodeErrorMessage = function (message) {
      if (!message) {
        return message;
      }
      const re = /react.dev\/errors\/(\d+)?\??([^\s]*)/;
      let matches = message.match(re);
      if (!matches || matches.length !== 3) {
        // Some tests use React 17, when the URL was different.
        const re17 = /error-decoder.html\?invariant=(\d+)([^\s]*)/;
        matches = message.match(re17);
        if (!matches || matches.length !== 3) {
          return message;
        }
      }
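      // For example (illustrative URL):
      //   'https://react.dev/errors/31?args[]=object&args[]=key'
      // yields matches[1] === '31' and
      // matches[2] === 'args[]=object&args[]=key'.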
      const code = parseInt(matches[1], 10);
      const args = matches[2]
        .split('&')
        .filter(s => s.startsWith('args[]='))
        .map(s => s.slice('args[]='.length))
        .map(decodeURIComponent);
      const format = errorMap[code];
      let argIndex = 0;
      return format.replace(/%s/g, () => args[argIndex++]);
    };
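
    // A worked example (sketch; the format string is hypothetical, the real
    // one lives in codes.json):
    //   decodeErrorMessage('Minified React error #31; visit ' +
    //     'https://react.dev/errors/31?args[]=object')
    // looks up errorMap[31] and substitutes 'object' for the first %s in
    // that format string.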
    const OriginalError = global.Error;
    // V8's Error.captureStackTrace (used in Jest) fails if the error object is
    // a Proxy, so we need to pass it the unproxied instance.
    const originalErrorInstances = new WeakMap();
    const captureStackTrace = function (error, ...args) {
      return OriginalError.captureStackTrace.call(
        this,
        originalErrorInstances.get(error) ||
          // Sometimes this wrapper receives an already-unproxied instance.
          error,
        ...args
      );
    };
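
    // Illustrative call path: Jest invokes Error.captureStackTrace(err) on
    // assertion errors; if err were one of our proxies, V8 would fail, so the
    // wrapper above substitutes the original unproxied instance first.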

    const proxyErrorInstance = error => {
      const proxy = new Proxy(error, {
        set(target, key, value, receiver) {
          if (key === 'message') {
            return Reflect.set(
              target,
              key,
              decodeErrorMessage(value),
              receiver
            );
          }
          return Reflect.set(target, key, value, receiver);
        },
        get(target, key, receiver) {
          if (key === 'stack') {
            // https://github.com/nodejs/node/issues/60862
            return Reflect.get(target, key);
          }
          return Reflect.get(target, key, receiver);
        },
      });
      originalErrorInstances.set(proxy, error);
      return proxy;
    };
    const ErrorProxy = new Proxy(OriginalError, {
      apply(target, thisArg, argumentsList) {
        const error = Reflect.apply(target, thisArg, argumentsList);
        error.message = decodeErrorMessage(error.message);
        return proxyErrorInstance(error);
      },
      construct(target, argumentsList, newTarget) {
        const error = Reflect.construct(target, argumentsList, newTarget);
        error.message = decodeErrorMessage(error.message);
        return proxyErrorInstance(error);
      },
      get(target, key, receiver) {
        if (key === 'captureStackTrace') {
          return captureStackTrace;
        }
        return Reflect.get(target, key, receiver);
|
Run Jest in production mode (#11616)
* Move Jest setup files to /dev/ subdirectory
* Clone Jest /dev/ files into /prod/
* Move shared code into scripts/jest
* Move Jest config into the scripts folder
* Fix the equivalence test
It fails because the config is now passed to Jest explicitly.
But the test doesn't know about the config.
To fix this, we just run it via `yarn test` (which includes the config).
We already depend on Yarn for development anyway.
* Add yarn test-prod to run Jest with production environment
* Actually flip the production tests to run in prod environment
This produces a bunch of errors:
Test Suites: 64 failed, 58 passed, 122 total
Tests: 740 failed, 26 skipped, 1809 passed, 2575 total
Snapshots: 16 failed, 4 passed, 20 total
* Ignore expectDev() calls in production
Down from 740 to 175 failed.
Test Suites: 44 failed, 78 passed, 122 total
Tests: 175 failed, 26 skipped, 2374 passed, 2575 total
Snapshots: 16 failed, 4 passed, 20 total
* Decode errors so tests can assert on their messages
Down from 175 to 129.
Test Suites: 33 failed, 89 passed, 122 total
Tests: 129 failed, 1029 skipped, 1417 passed, 2575 total
Snapshots: 16 failed, 4 passed, 20 total
* Remove ReactDOMProduction-test
There is no need for it now. The only test that was special is moved into ReactDOM-test.
* Remove production switches from ReactErrorUtils
The tests now run in production in a separate pass.
* Add and use spyOnDev() for warnings
This ensures that by default we expect no warnings in production bundles.
If the warning *is* expected, use the regular spyOn() method.
This currently breaks all expectDev() assertions without __DEV__ blocks so we go back to:
Test Suites: 56 failed, 65 passed, 121 total
Tests: 379 failed, 1029 skipped, 1148 passed, 2556 total
Snapshots: 16 failed, 4 passed, 20 total
* Replace expectDev() with expect() in __DEV__ blocks
We started using spyOnDev() for console warnings to ensure we don't *expect* them to occur in production. As a consequence, expectDev() assertions on console.error.calls fail because console.error.calls doesn't exist. This is actually good because it would help catch accidental warnings in production.
To solve this, we are getting rid of expectDev() altogether, and instead introduce explicit expectation branches. We'd need them anyway for testing intentional behavior differences.
This commit replaces all expectDev() calls with expect() calls in __DEV__ blocks. It also removes a few unnecessary expect() checks that no warnings were produced (by also removing the corresponding spyOnDev() calls).
Some DEV-only assertions used plain expect(). Those were also moved into __DEV__ blocks.
ReactFiberErrorLogger was special because it console.error()'s in production too. So in that case I intentionally used spyOn() instead of spyOnDev(), and added extra assertions.
This gets us down to:
Test Suites: 21 failed, 100 passed, 121 total
Tests: 72 failed, 26 skipped, 2458 passed, 2556 total
Snapshots: 16 failed, 4 passed, 20 total
* Enable User Timing API for production testing
We could've disabled it, but seems like a good idea to test since we use it at FB.
* Test for explicit Object.freeze() differences between PROD and DEV
This is one of the few places where DEV and PROD behavior differs for performance reasons.
Now we explicitly test both branches.
* Run Jest via "yarn test" on CI
* Remove unused variable
* Assert different error messages
* Fix error handling tests
This logic is really complicated because of the global ReactFiberErrorLogger mock.
I understand it now, so I added TODOs for later.
It can be much simpler if we change the rest of the tests that assert uncaught errors to also assert they are logged as warnings.
Which mirrors what happens in practice anyway.
* Fix more assertions
* Change tests to document the DEV/PROD difference for state invariant
It is very likely unintentional but I don't want to change behavior in this PR.
Filed a follow up as https://github.com/facebook/react/issues/11618.
* Remove unnecessary split between DEV/PROD ref tests
* Fix more test message assertions
* Make validateDOMNesting tests DEV-only
* Fix error message assertions
* Document existing DEV/PROD message difference (possible bug)
* Change mocking assertions to be DEV-only
* Fix the error code test
* Fix more error message assertions
* Fix the last failing test due to known issue
* Run production tests on CI
* Unify configuration
* Fix coverage script
* Remove expectDev from eslintrc
* Run everything in band
We used to run in band before, too; I just forgot to add the arguments back after deleting the script.
2017-11-22 13:02:26 +00:00
    },
  });
  ErrorProxy.OriginalError = OriginalError;
  global.Error = ErrorProxy;
}
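For context, the tail above closes a Proxy around the global Error constructor. The full block has roughly this shape (a hypothetical reconstruction, assuming a `decodeErrorMessage` helper that expands minified production error codes):
```js
const OriginalError = global.Error;
const ErrorProxy = new Proxy(OriginalError, {
  apply(target, thisArg, argumentsList) {
    // Decode "Minified React error #123" back into the full message so
    // production tests can assert on readable error text.
    const error = Reflect.apply(target, thisArg, argumentsList);
    error.message = decodeErrorMessage(error.message);
    return error;
  },
  construct(target, argumentsList, newTarget) {
    const error = Reflect.construct(target, argumentsList, newTarget);
    error.message = decodeErrorMessage(error.message);
    return error;
  },
});
```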
Deterministic updates (#10715)
* Deterministic updates
High priority updates typically require less work to render than
low priority ones. It's beneficial to flush those first, in their own
batch, before working on more expensive low priority ones. We do this
even if a high priority update is scheduled after a low priority one.
However, we don't want this reordering of updates to affect the terminal
state. State should be deterministic: once all work has been flushed,
the final state should be the same regardless of how they were
scheduled.
To get both properties, we store updates on the queue in insertion
order instead of priority order (always append). Then, when processing
the queue, we skip over updates with insufficient priority. Instead of
removing updates from the queue right after processing them, we only
remove an update once there are no unprocessed updates before it in the list.
This means that updates may be processed more than once.
As a bonus, the new implementation is simpler and requires less code.
* Fix ceiling function
Mixed up the operators.
* Remove addUpdate, addReplaceState, et al
These functions don't really do anything. Simpler to use a single
insertUpdateIntoFiber function.
Also splits scheduleUpdate into two functions:
- scheduleWork traverses a fiber's ancestor path and updates their
expiration times.
- scheduleUpdate inserts an update into a fiber's update queue, then
calls scheduleWork.
* Remove getExpirationTime
The last remaining use for getExpirationTime was for top-level async
updates. I moved that check to scheduleUpdate instead.
* Move UpdateQueue insertions back to class module
Moves UpdateQueue related functions out of the scheduler and back into
the class component module. It's a bit awkward that now we need to pass
around createUpdateExpirationForFiber, too. But we can still do without
addUpdate, replaceUpdate, et al.
* Store callbacks as an array of Updates
Simpler this way.
Also moves commitCallbacks back to UpdateQueue module.
* beginUpdateQueue -> processUpdateQueue
* Updates should never have an expiration of NoWork
* Rename expiration related functions
* Fix update queue Flow types
Gets rid of an unnecessary null check
2017-10-13 17:21:25 -07:00
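To illustrate the invariant described above (a hypothetical sketch using a plain array rather than React's actual linked-list update queue; `isSufficientPriority` and `applyUpdate` are stand-ins for the real logic):
```js
// Updates stay in insertion order. Processing skips entries with
// insufficient priority, and an update is only removed once everything
// before it has been processed, so skipped updates (and anything after
// them) may be processed again on a later pass.
function processUpdateQueue(queue, state, isSufficientPriority, applyUpdate) {
  let firstSkippedIndex = -1;
  queue.forEach((update, index) => {
    if (!isSufficientPriority(update)) {
      if (firstSkippedIndex === -1) {
        firstSkippedIndex = index;
      }
      return; // Skip for now; keep it (and its successors) in the queue.
    }
    // Applying in insertion order keeps the final state deterministic
    // even when high priority work is flushed in an earlier batch.
    state = applyUpdate(state, update);
  });
  const remaining =
    firstSkippedIndex === -1 ? [] : queue.slice(firstSkippedIndex);
  return {state, remaining};
}
```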
const expectTestToFail = async (callback, errorToThrowIfTestSucceeds) => {
  if (callback.length > 0) {
    throw Error(
      'Gated test helpers do not support the `done` callback. Return a ' +
        'promise instead.'
    );
  }

  // Install a global error event handler. We treat global error events as
  // test failures, same as Jest's default behavior.
  //
  // Because we installed our own error event handler, Jest will not report a
  // test failure. Conceptually it's as if we wrapped the entire test in
  // a try-catch.
  let didError = false;
  const errorEventHandler = () => {
    didError = true;
  };
  // eslint-disable-next-line no-restricted-globals
  if (typeof addEventListener === 'function') {
    // eslint-disable-next-line no-restricted-globals
    addEventListener('error', errorEventHandler);
  }

  try {
    const maybePromise = callback();
    if (
      maybePromise !== undefined &&
      maybePromise !== null &&
      typeof maybePromise.then === 'function'
    ) {
      await maybePromise;
    }
    // Flush unexpected console calls inside the test itself, instead of in
    // `afterEach` like we normally do. `afterEach` is too late because if it
    // throws, we won't have captured it.
    assertConsoleLogsCleared();
  } catch (testError) {
    didError = true;
  }
  resetAllUnexpectedConsoleCalls();
  // eslint-disable-next-line no-restricted-globals
  if (typeof removeEventListener === 'function') {
    // eslint-disable-next-line no-restricted-globals
    removeEventListener('error', errorEventHandler);
  }

  if (!didError) {
    // The test did not error like we expected it to. Report this to Jest as
    // a failure.
    throw errorToThrowIfTestSucceeds;
  }
};

const coerceGateConditionToFunction = gateFnOrString => {
  return typeof gateFnOrString === 'string'
    ? // `gate('foo')` is treated as equivalent to `gate(flags => flags.foo)`
      flags => flags[gateFnOrString]
    : // Assume this is already a function
      gateFnOrString;
};

const gatedErrorMessage = 'Gated test was expected to fail, but it passed.';

global._test_gate = (gateFnOrString, testName, callback, timeoutMS) => {
  const gateFn = coerceGateConditionToFunction(gateFnOrString);
  let shouldPass;
  try {
    const flags = getTestFlags();
    shouldPass = gateFn(flags);
  } catch (e) {
    test(
      testName,
      () => {
        throw e;
      },
      timeoutMS
    );
    return;
  }
  if (shouldPass) {
    test(testName, callback, timeoutMS);
  } else {
    const error = new Error(gatedErrorMessage);
    Error.captureStackTrace(error, global._test_gate);
    test(
      `[GATED, SHOULD FAIL] ${testName}`,
      () => expectTestToFail(callback, error),
      timeoutMS
    );
  }
};

global._test_gate_focus = (gateFnOrString, testName, callback, timeoutMS) => {
  const gateFn = coerceGateConditionToFunction(gateFnOrString);
  let shouldPass;
  try {
    const flags = getTestFlags();
    shouldPass = gateFn(flags);
  } catch (e) {
    test.only(
      testName,
      () => {
        throw e;
      },
      timeoutMS
    );
    return;
  }
  if (shouldPass) {
    test.only(testName, callback, timeoutMS);
  } else {
    const error = new Error(gatedErrorMessage);
    Error.captureStackTrace(error, global._test_gate_focus);
    test.only(
      `[GATED, SHOULD FAIL] ${testName}`,
      () => expectTestToFail(callback, error),
      timeoutMS
    );
  }
};

// Dynamic version of @gate pragma
global.gate = gateFnOrString => {
  const gateFn = coerceGateConditionToFunction(gateFnOrString);
  const flags = getTestFlags();
  return gateFn(flags);
};
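In a test body, the dynamic API reads like this (illustrative usage; `enableSomeFlag` is a made-up flag name):
```js
test('covers both sides of a flag', () => {
  if (gate(flags => flags.enableSomeFlag)) {
    // Assert the flagged-on behavior here.
  } else {
    // Assert the legacy behavior here.
  }
  // The string form is shorthand for the arrow above:
  const enabled = gate('enableSomeFlag');
});
```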
// We augment JSDOM to produce a document that has a loading readyState by default
|
|
|
|
|
// and can be changed. We mock it here globally so we don't have to import our special
|
|
|
|
|
// mock in every file.
|
|
|
|
|
jest.mock('jsdom', () => {
|
|
|
|
|
return require('internal-test-utils/ReactJSDOM.js');
|
|
|
|
|
});
|
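The wrapper module itself is not shown here. As a hedged sketch, assuming it subclasses JSDOM and overrides `document.readyState` (the `_setReadyState` helper below is a hypothetical name):

```js
// Hypothetical sketch of a module like internal-test-utils/ReactJSDOM.js.
const JSDOMModule = jest.requireActual('jsdom');

class JSDOM extends JSDOMModule.JSDOM {
  constructor(...args) {
    super(...args);
    // Start in 'loading' instead of jsdom's default, and let tests
    // advance the readyState manually.
    let readyState = 'loading';
    Object.defineProperty(this.window.document, 'readyState', {
      configurable: true,
      get: () => readyState,
    });
    this.window.document._setReadyState = next => {
      readyState = next;
    };
  }
}

module.exports = {...JSDOMModule, JSDOM};
```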
2016-11-13 14:00:07 -08:00

}
[Flight] Track Awaits on I/O as Debug Info (#33388)
This lets us track what data each Server Component depended on. This
will be used by Performance Track and React DevTools.
We use Node.js `async_hooks`. This has a number of downsides. It is
Node.js specific, so this feature is not available in other runtimes
until something equivalent becomes available. It's [discouraged by
Node.js docs](https://nodejs.org/api/async_hooks.html#async-hooks). It's
also slow, which makes this approach only really viable in development
mode, at least with stack traces. However, it's really the only solution
that gives us the data that we need.
The [Diagnostic
Channel](https://nodejs.org/api/diagnostics_channel.html) API is not
sufficient. Not only are many Node.js built-in APIs missing, but so are
libraries like databases. Whereas `async_hooks` covers
pretty much anything async in the Node.js ecosystem.
However, even if coverage were wider, it wouldn't actually show the
information we want. It's not enough to show the low level I/O that is
happening, because that doesn't provide the context. We need the stack
trace in user space code where it was initiated and where it was
awaited. It's also not each low level socket operation that we want to
surface, but some higher level concept that can span a sequence of I/O
operations as far as user space is concerned.
Therefore this solution is anchored on stack traces and ignore listing
to determine what the interesting span is. It is somewhat
Promise-centric (and in particular async/await) because it allows us to
model an abstract span instead of just random I/O. Async/await points
are also especially useful because they allow Async Stacks to show the
full sequence, which is not supported by random callbacks. However, if no
Promises are involved we still do our best to show the stack causing
plain I/O callbacks.
Additionally, we don't want to track all possible I/O. For example,
side effects like logging that don't affect rendering performance
don't need to be included. We only want to include things that
actually block the rendering output. We also need to track which data
blocks each component, so that we can track which data caused a
particular subtree to suspend.
We can do this using `async_hooks` because we can track the graph of
what resolved what and then spawned what.
To track what suspended what, something has to resolve. Therefore it
needs to run to completion before we can show what it was suspended on.
So something that never resolves won't be tracked, for example.
We use the `async_hooks` in `ReactFlightServerConfigDebugNode` to build
up a `ReactFlightAsyncSequence` graph that collects the stack traces
for basically all I/O and Promises allocated in the whole app. This is
pretty heavy, especially the stack traces, but that's because we don't
know which ones we'll need until they resolve. We don't materialize the
stacks until we need them, though.
Once they end up pinging the Flight runtime, we collect the currently
executing task that pinged the runtime and then log the sequence that
led up to it into the RSC protocol. Currently we only
include things that weren't already resolved before we started rendering
this task/component, so that we don't log the entire history each time.
Each operation is split into two parts. First, a `ReactIOInfo` which
represents an I/O operation and its start/end time. Basically the
point where it was started. This represents where you called
`new Promise()` or entered an `async function`, which has an
implied Promise. It can be started in a different component than where
it's awaited, and it can be awaited in multiple places. Therefore this is
global information and not associated with a specific Component.
The second part is `ReactAsyncInfo`. This represents where this I/O was
`await`:ed or had `.then()` called on it. This is associated with a point
in the tree (usually the Promise that's a direct child of a Component).
Since you can have multiple different I/O awaited in a sequence,
technically it forms a dependency graph, but to simplify the model these
awaits are flattened into the `ReactDebugInfo` list. Basically it
contains each await in a sequence that kept this part from unblocking.
This means that the same `ReactAsyncInfo` can appear in multiple
components if they all await the same `ReactIOInfo`, but the same Promise
only appears once.
Promises that are only resolved by other Promises or immediately are not
considered here; only ones resolved by an I/O operation are. We pick
the Promise basically on the border between user space code and
ignore-listed code (`node_modules`) to get the most specific span that is
abstract enough to not give too much detail irrelevant to the current
audience. Similarly, the deepest `await` in user space is marked as the
relevant `await` point.
This feature is only available in the `node` builds of React, not if you
use the `edge` builds inside of Node.js.
---------
Co-authored-by: Sebastian "Sebbie" Silbermann <silbermann.sebastian@gmail.com>
2025-06-03 14:14:40 -04:00
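As a simplified sketch of the tracking idea (not React's actual `ReactFlightServerConfigDebugNode` implementation), `async_hooks` lets us record, for every async resource, what triggered it and in which context it resolved:

```js
// Simplified illustration only; the real graph and bookkeeping are richer.
const {createHook, executionAsyncId} = require('async_hooks');

// asyncId -> node describing the resource and its edges in the graph.
const nodes = new Map();

createHook({
  init(asyncId, type, triggerAsyncId) {
    // Record the creation edge. Keeping an Error object is cheap because
    // V8 materializes the .stack string lazily, on first access.
    nodes.set(asyncId, {type, triggerAsyncId, stackHolder: new Error()});
  },
  promiseResolve(asyncId) {
    // Record which execution context resolved this Promise; following
    // these edges backwards yields "what resolved what".
    const node = nodes.get(asyncId);
    if (node) {
      node.resolvedInAsyncId = executionAsyncId();
    }
  },
}).enable();
```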
// We mock createHook so that we can automatically clean it up.
let installedHook = null;
jest.mock('async_hooks', () => {
  const actual = jest.requireActual('async_hooks');
  return {
    ...actual,
    createHook(config) {

2025-06-06 16:26:36 -04:00

      if (installedHook) {
        installedHook.disable();
      }
      return (installedHook = actual.createHook(config));
    },
  };
});
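With this mock in place, a test that installs a hook doesn't need to clean up after itself; creating the next hook disables the previous one. An illustrative usage:

```js
// Illustrative only: each createHook call through the mock above disables
// the previously installed hook, so hooks can't leak between tests.
const {createHook} = require('async_hooks');

test('first test installs a hook', () => {
  createHook({init() {}}).enable();
  // No explicit disable() needed here.
});

test('second test installs another hook', () => {
  // The hook from the previous test is disabled as a side effect.
  createHook({init() {}}).enable();
});
```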