// react/scripts/jest/setupTests.js

'use strict';
const {getTestFlags} = require('./TestFlags');
const {
assertConsoleLogsCleared,
resetAllUnexpectedConsoleCalls,
patchConsoleMethods,
} = require('internal-test-utils/consoleMock');
const path = require('path');
if (process.env.REACT_CLASS_EQUIVALENCE_TEST) {
// Inside the class equivalence tester, we have a custom environment, so
// require that instead.
require('./spec-equivalence-reporter/setupTests.js');
} else {
const errorMap = require('../error-codes/codes.json');
// By default, jest.spyOn also calls the spied method.
const spyOn = jest.spyOn;
const noop = jest.fn;
// Can be used to normalize paths in stackframes
global.__REACT_ROOT_PATH_TEST__ = path.resolve(__dirname, '../..');
// Spying on console methods in production builds can mask errors.
// This is why we added an explicit spyOnDev() helper.
// It's too easy to accidentally use the more familiar spyOn() helper though,
// so we disable it entirely.
// To spy in both dev and prod, use spyOnDevAndProd() (or both spyOnDev()
// and spyOnProd()).
global.spyOn = function () {
throw new Error(
'Do not use spyOn(). ' +
'It can accidentally hide unexpected errors in production builds. ' +
'Use spyOnDev(), spyOnProd(), or spyOnDevAndProd() instead.'
);
};
if (process.env.NODE_ENV === 'production') {
global.spyOnDev = noop;
global.spyOnProd = spyOn;
global.spyOnDevAndProd = spyOn;
} else {
global.spyOnDev = spyOn;
global.spyOnProd = noop;
global.spyOnDevAndProd = spyOn;
}
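// Minimal usage sketch (hypothetical test code, not part of this setup file):
// a test that expects a DEV-only warning spies with spyOnDev() and asserts
// inside a __DEV__ block, so the production pass expects no console output.
//
//   spyOnDev(console, 'error');
//   // ...run code under test that warns in DEV...
//   if (__DEV__) {
//     expect(console.error).toHaveBeenCalledTimes(1);
//   }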
expect.extend({
...require('./matchers/reactTestMatchers'),
...require('./matchers/toThrow'),
});
// We have a Babel transform that inserts guards against infinite loops.
// If a loop runs for too many iterations, we throw an error and set this
// global variable. The global lets us detect an infinite loop even if
// the actual error object ends up being caught and ignored. An infinite
// loop must always fail the test!
beforeEach(() => {
global.infiniteLoopError = null;
});
afterEach(() => {
const error = global.infiniteLoopError;
global.infiniteLoopError = null;
if (error) {
throw error;
}
});
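// Rough sketch of the idea (not the actual transform output; MAX_ITERATIONS
// and work() are placeholder names): a loop such as
//
//   while (cond) { work(); }
//
// is compiled to something along the lines of
//
//   let __iterations = 0;
//   while (cond) {
//     if (__iterations++ > MAX_ITERATIONS) {
//       const error = new Error('Infinite loop detected.');
//       global.infiniteLoopError = error;
//       throw error;
//     }
//     work();
//   }
//
// so even if the thrown error is caught and ignored, the global set above
// still fails the test in the afterEach hook.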
// Patch the console so that unexpected error/warn/log calls fail the test.
patchConsoleMethods({includeLog: !!process.env.CI});
beforeEach(resetAllUnexpectedConsoleCalls);
afterEach(assertConsoleLogsCleared);
// TODO: enable this check so we don't forget to reset spyOnX mocks.
// afterEach(() => {
// if (
// console[methodName] !== mockMethod &&
// !jest.isMockFunction(console[methodName])
// ) {
// throw new Error(
// `Test did not tear down console.${methodName} mock properly.`
// );
// }
// });
if (process.env.NODE_ENV === 'production') {
// In production, we strip error messages and turn them into codes.
// This decodes them back so that the test assertions on them work.
// 1. `ErrorProxy` decodes error messages at Error construction time and
// also proxies error instances with `proxyErrorInstance`.
// 2. `proxyErrorInstance` decodes error messages when the `message`
// property is changed.
const decodeErrorMessage = function (message) {
if (!message) {
return message;
}
const re = /react.dev\/errors\/(\d+)?\??([^\s]*)/;
let matches = message.match(re);
if (!matches || matches.length !== 3) {
// Some tests use React 17, when the URL was different.
const re17 = /error-decoder.html\?invariant=(\d+)([^\s]*)/;
matches = message.match(re17);
if (!matches || matches.length !== 3) {
return message;
}
}
const code = parseInt(matches[1], 10);
const args = matches[2]
.split('&')
.filter(s => s.startsWith('args[]='))
.map(s => s.slice('args[]='.length))
.map(decodeURIComponent);
const format = errorMap[code];
let argIndex = 0;
return format.replace(/%s/g, () => args[argIndex++]);
};
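// Illustrative example (hypothetical error code and argument):
//
//   decodeErrorMessage('https://react.dev/errors/152?args[]=div')
//
// looks up the message template for code 152 in codes.json and substitutes
// 'div' for its %s placeholder, returning the full development message.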
const OriginalError = global.Error;
// V8's Error.captureStackTrace (used in Jest) fails if the error object is
// a Proxy, so we need to pass it the unproxied instance.
const originalErrorInstances = new WeakMap();
const captureStackTrace = function (error, ...args) {
return OriginalError.captureStackTrace.call(
this,
originalErrorInstances.get(error) ||
// Sometimes this wrapper receives an already-unproxied instance.
error,
...args
);
};
const proxyErrorInstance = error => {
const proxy = new Proxy(error, {
set(target, key, value, receiver) {
if (key === 'message') {
return Reflect.set(
target,
key,
decodeErrorMessage(value),
receiver
);
}
return Reflect.set(target, key, value, receiver);
},
get(target, key, receiver) {
if (key === 'stack') {
// https://github.com/nodejs/node/issues/60862
return Reflect.get(target, key);
}
return Reflect.get(target, key, receiver);
},
});
originalErrorInstances.set(proxy, error);
return proxy;
};
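// Effectively (illustrative only; the URL and argument are made up):
//
//   const error = proxyErrorInstance(new OriginalError('some message'));
//   error.message = 'https://react.dev/errors/152?args[]=div';
//   // error.message now holds the decoded development message, because the
//   // proxy's `set` trap runs decodeErrorMessage before writing the value.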
const ErrorProxy = new Proxy(OriginalError, {
apply(target, thisArg, argumentsList) {
const error = Reflect.apply(target, thisArg, argumentsList);
error.message = decodeErrorMessage(error.message);
return proxyErrorInstance(error);
},
construct(target, argumentsList, newTarget) {
const error = Reflect.construct(target, argumentsList, newTarget);
error.message = decodeErrorMessage(error.message);
return proxyErrorInstance(error);
},
get(target, key, receiver) {
if (key === 'captureStackTrace') {
return captureStackTrace;
}
return Reflect.get(target, key, receiver);
},
});
ErrorProxy.OriginalError = OriginalError;
global.Error = ErrorProxy;
}
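// Rough example of what the proxy above enables (placeholder function and
// message; the actual decoding depends on decodeErrorMessage and the React
// error map): production bundles throw minified errors that only contain an
// error code plus a link to the error decoder, and the proxy restores the
// readable text so tests can assert on it:
//
//   expect(() => somethingThatThrowsMinified()).toThrow(
//     'Full, human readable error message'
//   );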
const expectTestToFail = async (callback, errorToThrowIfTestSucceeds) => {
if (callback.length > 0) {
throw Error(
'Gated test helpers do not support the `done` callback. Return a ' +
'promise instead.'
);
}
// Install a global error event handler. We treat global error events as
// test failures, same as Jest's default behavior.
//
// Because we installed our own error event handler, Jest will not report a
// test failure. Conceptually, it's as if we wrapped the entire test in a
// try-catch.
let didError = false;
const errorEventHandler = () => {
didError = true;
};
// eslint-disable-next-line no-restricted-globals
if (typeof addEventListener === 'function') {
// eslint-disable-next-line no-restricted-globals
addEventListener('error', errorEventHandler);
}
try {
const maybePromise = callback();
if (
maybePromise !== undefined &&
maybePromise !== null &&
typeof maybePromise.then === 'function'
) {
await maybePromise;
}
// Flush unexpected console calls inside the test itself, instead of in
// `afterEach` like we normally do. `afterEach` is too late because if it
// throws, we won't have captured it.
assertConsoleLogsCleared();
} catch (testError) {
didError = true;
}
resetAllUnexpectedConsoleCalls();
// eslint-disable-next-line no-restricted-globals
if (typeof removeEventListener === 'function') {
// eslint-disable-next-line no-restricted-globals
removeEventListener('error', errorEventHandler);
}
if (!didError) {
// The test did not error like we expected it to. Report this to Jest as
// a failure.
throw errorToThrowIfTestSucceeds;
}
};
const coerceGateConditionToFunction = gateFnOrString => {
return typeof gateFnOrString === 'string'
? // `gate('foo')` is treated as equivalent to `gate(flags => flags.foo)`
flags => flags[gateFnOrString]
: // Assume this is already a function
gateFnOrString;
};
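// Illustrative equivalence (placeholder flag name): after coercion,
//
//   gate('enableSomeFlag')
//
// behaves the same as
//
//   gate(flags => flags.enableSomeFlag)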
const gatedErrorMessage = 'Gated test was expected to fail, but it passed.';
global._test_gate = (gateFnOrString, testName, callback, timeoutMS) => {
const gateFn = coerceGateConditionToFunction(gateFnOrString);
let shouldPass;
try {
const flags = getTestFlags();
shouldPass = gateFn(flags);
} catch (e) {
test(
testName,
() => {
throw e;
},
timeoutMS
);
return;
}
if (shouldPass) {
test(testName, callback, timeoutMS);
} else {
const error = new Error(gatedErrorMessage);
Error.captureStackTrace(error, global._test_gate);
test(`[GATED, SHOULD FAIL] ${testName}`, () =>
  expectTestToFail(callback, error), timeoutMS);
}
};
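// Sketch of the call the `@gate` pragma is expected to compile into
// (illustrative; the flag names below are placeholders, not real flags):
//
//   // @gate enableSomeFlag && enableOtherFlag
//   test('some test', () => {/* ... */});
//
// becomes approximately:
//
//   _test_gate(
//     ctx => ctx.enableSomeFlag && ctx.enableOtherFlag,
//     'some test',
//     () => {/* ... */}
//   );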
global._test_gate_focus = (gateFnOrString, testName, callback, timeoutMS) => {
const gateFn = coerceGateConditionToFunction(gateFnOrString);
let shouldPass;
try {
const flags = getTestFlags();
shouldPass = gateFn(flags);
} catch (e) {
test.only(
testName,
() => {
throw e;
},
timeoutMS
);
return;
}
if (shouldPass) {
test.only(testName, callback, timeoutMS);
} else {
const error = new Error(gatedErrorMessage);
Error.captureStackTrace(error, global._test_gate_focus);
test.only(
`[GATED, SHOULD FAIL] ${testName}`,
() => expectTestToFail(callback, error),
timeoutMS
);
}
};
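// Focused tests are expected to follow the same shape: a gated `test.only`
// or `fit` routes through `_test_gate_focus` with the same arguments, e.g.
// (placeholder flag):
//
//   _test_gate_focus(ctx => ctx.enableSomeFlag, 'focused test', () => {/* ... */});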
// Dynamic version of @gate pragma
global.gate = gateFnOrString => {
const gateFn = coerceGateConditionToFunction(gateFnOrString);
const flags = getTestFlags();
return gateFn(flags);
};
// We augment JSDOM to produce a document whose readyState starts as 'loading'
// and can be changed. We mock it here globally so we don't have to import our
// special mock in every file.
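// Tests can advance the readyState by assigning to document.readyState (or reset it
// back to 'loading'); the mock fires the appropriate events when the state changes.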
jest.mock('jsdom', () => {
return require('internal-test-utils/ReactJSDOM.js');
});
}
// We mock createHook so that we can automatically clean it up.
let installedHook = null;
jest.mock('async_hooks', () => {
const actual = jest.requireActual('async_hooks');
return {
...actual,
createHook(config) {
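// Disable any hook left installed by a previous test so hooks don't accumulate
// across test runs.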
if (installedHook) {
installedHook.disable();
}
return (installedHook = actual.createHook(config));
},
};
});