/* eslint-disable */
Run Jest in production mode (#11616)
* Move Jest setup files to /dev/ subdirectory
* Clone Jest /dev/ files into /prod/
* Move shared code into scripts/jest
* Move Jest config into the scripts folder
* Fix the equivalence test
It fails because the config is now passed to Jest explicitly, but the test doesn't know about the config.
To fix this, we just run it via `yarn test` (which includes the config).
We already depend on Yarn for development anyway.
* Add yarn test-prod to run Jest in the production environment
* Actually flip the production tests to run in the prod environment
This produces a bunch of errors:
Test Suites: 64 failed, 58 passed, 122 total
Tests: 740 failed, 26 skipped, 1809 passed, 2575 total
Snapshots: 16 failed, 4 passed, 20 total
* Ignore expectDev() calls in production
Down from 740 to 175 failed.
Test Suites: 44 failed, 78 passed, 122 total
Tests: 175 failed, 26 skipped, 2374 passed, 2575 total
Snapshots: 16 failed, 4 passed, 20 total
* Decode errors so tests can assert on their messages
Down from 175 to 129.
Test Suites: 33 failed, 89 passed, 122 total
Tests: 129 failed, 1029 skipped, 1417 passed, 2575 total
Snapshots: 16 failed, 4 passed, 20 total
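The decoding itself can live in a small helper in the test setup. A sketch, assuming the repo's error-codes map and the standard decoder URL format (the path and names here are illustrative, not the exact implementation):

    // Turn a minified production error back into its full message so that
    // tests can keep asserting on the readable text.
    const errorMap = require('../error-codes/codes.json');

    function decodeErrorMessage(message) {
      const match = message.match(/error-decoder\.html\?invariant=(\d+)([^\s]*)/);
      if (!match) {
        return message; // not a coded error; leave it untouched
      }
      const code = match[1];
      const args = match[2].split('&args[]=').slice(1).map(decodeURIComponent);
      let i = 0;
      return errorMap[code].replace(/%s/g, () => args[i++]);
    }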
* Remove ReactDOMProduction-test
There is no need for it now. The only test that was special has been moved into ReactDOM-test.
* Remove production switches from ReactErrorUtils
The tests now run in production in a separate pass.
* Add and use spyOnDev() for warnings
This ensures that by default we expect no warnings in production bundles.
If the warning *is* expected, use the regular spyOn() method.
This currently breaks all expectDev() assertions that aren't wrapped in __DEV__ blocks, so we go back to:
Test Suites: 56 failed, 65 passed, 121 total
Tests: 379 failed, 1029 skipped, 1148 passed, 2556 total
Snapshots: 16 failed, 4 passed, 20 total
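One plausible shape for the helper, as a sketch (the real setup code may differ):

    // Spy on a method in DEV only. In production the method is left untouched,
    // so an unexpected warning isn't silently swallowed by a spy.
    global.spyOnDev = function (object, methodName) {
      if (__DEV__) {
        return spyOn(object, methodName);
      }
    };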
* Replace expectDev() with expect() in __DEV__ blocks
We started using spyOnDev() for console warnings to ensure we don't *expect* them to occur in production. As a consequence, expectDev() assertions on console.error.calls fail because console.error.calls doesn't exist. This is actually good because it would help catch accidental warnings in production.
To solve this, we are getting rid of expectDev() altogether, and instead introduce explicit expectation branches. We'd need them anyway for testing intentional behavior differences.
This commit replaces all expectDev() calls with expect() calls in __DEV__ blocks. It also removes a few unnecessary expect() checks that no warnings were produced (by also removing the corresponding spyOnDev() calls).
Some DEV-only assertions used plain expect(). Those were also moved into __DEV__ blocks.
ReactFiberErrorLogger was special because it calls console.error() in production too. So in that case I intentionally used spyOn() instead of spyOnDev(), and added extra assertions.
This gets us down to:
Test Suites: 21 failed, 100 passed, 121 total
Tests: 72 failed, 26 skipped, 2458 passed, 2556 total
Snapshots: 16 failed, 4 passed, 20 total
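The resulting test pattern, sketched with a hypothetical warning trigger:

    it('warns in DEV only', () => {
      spyOnDev(console, 'error');
      doSomethingThatWarns(); // hypothetical; stands in for the real trigger
      if (__DEV__) {
        // Jasmine-style spy API, hence the console.error.calls mentioned above.
        expect(console.error.calls.count()).toBe(1);
        expect(console.error.calls.argsFor(0)[0]).toContain('Warning:');
      }
    });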
* Enable User Timing API for production testing
We could've disabled it, but it seems like a good idea to test it since we use it at FB.
* Test for explicit Object.freeze() differences between PROD and DEV
This is one of the few places where DEV and PROD behavior differs for performance reasons.
Now we explicitly test both branches.
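A branched assertion of that shape might look like this (the element here is illustrative):

    const element = React.createElement('div', {foo: 'bar'});
    if (__DEV__) {
      // DEV freezes elements and props to catch accidental mutation early.
      expect(Object.isFrozen(element)).toBe(true);
    } else {
      // PROD skips Object.freeze() for performance.
      expect(Object.isFrozen(element)).toBe(false);
    }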
* Run Jest via "yarn test" on CI
* Remove unused variable
* Assert different error messages
* Fix error handling tests
This logic is really complicated because of the global ReactFiberErrorLogger mock.
I understand it now, so I added TODOs for later.
It can be much simpler if we change the rest of the tests that assert uncaught errors to also assert they are logged as warnings, which mirrors what happens in practice anyway.
* Fix more assertions
* Change tests to document the DEV/PROD difference for state invariant
It is very likely unintentional but I don't want to change behavior in this PR.
Filed a follow up as https://github.com/facebook/react/issues/11618.
* Remove unnecessary split between DEV/PROD ref tests
* Fix more test message assertions
* Make validateDOMNesting tests DEV-only
* Fix error message assertions
* Document existing DEV/PROD message difference (possible bug)
* Change mocking assertions to be DEV-only
* Fix the error code test
* Fix more error message assertions
* Fix the last failing test due to known issue
* Run production tests on CI
* Unify configuration
* Fix coverage script
* Remove expectDev from eslintrc
* Run everything in band
We used to run in band before, too; I just forgot to add the arguments after deleting the script.
const NODE_ENV = process.env.NODE_ENV;
if (NODE_ENV !== 'development' && NODE_ENV !== 'production') {
  throw new Error('NODE_ENV must either be set to development or production.');
}
global.__DEV__ = NODE_ENV === 'development';
global.__EXTENSION__ = false;
global.__TEST__ = NODE_ENV === 'test';
global.__PROFILE__ = NODE_ENV === 'development';
const RELEASE_CHANNEL = process.env.RELEASE_CHANNEL;
// Default to running tests in experimental mode. If the release channel is
// set via an environment variable, then check if it's "experimental".
global.__EXPERIMENTAL__ =
  typeof RELEASE_CHANNEL === 'string'
    ? RELEASE_CHANNEL === 'experimental'
    : true;
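// Tests can branch on __EXPERIMENTAL__ the same way they branch on __DEV__.
// For example (the export name below is hypothetical):
//
//   if (__EXPERIMENTAL__) {
//     expect(React.unstable_someExperimentalAPI).toBeDefined();
//   } else {
//     expect(React.unstable_someExperimentalAPI).toBe(undefined);
//   }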
Add test run that uses www feature flags (#18234)
In CI, we run our test suite against multiple build configurations. For
example, we run our tests in both dev and prod, and in both the
experimental and stable release channels. This is to prevent accidental
deviations in behavior between the different builds. If there's an
intentional deviation in behavior, the test author must account
for it.
However, we currently don't run tests against the www builds. That's
a problem, because it's common for features to land in www before they
land anywhere else, including the experimental release channel.
Typically we do this so we can gradually roll out the feature behind
a flag before deciding to enable it.
The way we test those features today is by mutating the
`shared/ReactFeatureFlags` module. There are a few downsides to this
approach, though. The flag is only overridden for the specific tests or
test suites where you apply the override. But usually what you want is
to run *all* tests with the flag enabled, to protect against unexpected
regressions.
Also, mutating the feature flags module only works when running the
tests against source, not against the final build artifacts, because the
ReactFeatureFlags module is inlined by the build script.
Instead, we should run the test suite against the www configuration,
just like we do for prod, experimental, and so on. I've added a new
command, `yarn test-www`. It automatically runs in CI.
Some of the www feature flags are dynamic; that is, they depend on
a runtime condition (i.e. a GK). These flags are imported from an
external module that lives in www. Those flags will be enabled for some
clients and disabled for others, so we should run the tests against
*both* modes.
So I've added a new global `__VARIANT__`, and a new test command
`yarn test-www-variant`. `__VARIANT__` is set to false by default; when
running `test-www-variant`, it's set to true.
If we were going for *really* comprehensive coverage, we would run the
tests against every possible configuration of feature flags:
2^numberOfFlags total combinations. That's not practical, though, so
instead we only run against two combinations: once with `__VARIANT__`
set to `true`, and once with it set to `false`. We generally assume that
flags can be toggled independently, so in practice this should
be enough.
You can also refer to `__VARIANT__` in tests to detect which mode you're
running in. Or, you can import `shared/ReactFeatureFlags` and read the
specific flag you care about. However, we should stop mutating that
module going forward. Treat it as read-only.
In this commit, I have only set up the www tests to run against source.
I'll leave running against build for a follow up.
Many of our tests currently assume they run only in the default
configuration, and break when certain flags are toggled. Rather than fix
these all up front, I've hard-coded the relevant flags to the default
values. We can incrementally migrate those tests later.
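As an example, a test could read a flag through that module and branch on the variant (the flag name below is made up):

    const ReactFeatureFlags = require('shared/ReactFeatureFlags');

    it('respects the dynamic flag', () => {
      if (__VARIANT__) {
        expect(ReactFeatureFlags.enableHypotheticalFeature).toBe(true);
      } else {
        expect(ReactFeatureFlags.enableHypotheticalFeature).toBe(false);
      }
    });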
global.__VARIANT__ = !!process.env.VARIANT;
if (typeof window !== 'undefined') {
} else {
  global.AbortController =
    require('abortcontroller-polyfill/dist/cjs-ponyfill').AbortController;
}
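// With the ponyfill in place, Node-side tests can use the standard API, e.g.:
//
//   const controller = new AbortController();
//   controller.signal.addEventListener('abort', () => {
//     /* clean up the aborted work */
//   });
//   controller.abort();
//   // controller.signal.aborted is now true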