We've heard from multiple contributors that the Reconciler forking
mechanism was confusing and/or annoying to deal with. Since it's
currently unused and there are no immediate plans to start using it again,
this removes the forking.
Fully removing the fork is split into 2 steps to preserve file history:
**This PR**
- remove the `enableNewReconciler` feature flag
- remove `unstable_isNewReconciler` export
- remove eslint rules for cross fork imports
- remove `*.new.js` files and update imports
- merge non-suffixed files into `*.old` files where both exist
(sometimes types were defined there)
**#25775**
- rename `*.old` files
* Add fetch instrumentation in cached contexts
* Avoid unhandled rejection errors for Promises that we intentionally ignore
In the final passes, we ignore the newly generated Promises and use
the previous ones. This ensures that if those newly generated Promises
reject, the errors are ignored intentionally rather than surfacing as
unhandled rejections.
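A minimal sketch of that pattern, with a hypothetical helper name (not the actual implementation):

```js
// Attach a no-op rejection handler so an intentionally ignored Promise
// never surfaces as an unhandled rejection. `ignoreReject` is a
// hypothetical helper used only for illustration.
function ignoreReject(promise) {
  promise.then(
    () => {},
    () => {} // swallow the rejection on purpose
  );
  return promise;
}

// Keep using the Promise from the previous pass; the newly generated one
// is ignored, so any error it produces is intentionally dropped:
// const result = previousPromise;
// ignoreReject(newlyGeneratedPromise);
```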
* Add extra fetch properties if there were any
* Move Fizz inline instructions to unified module
Instead of a separate module per instruction, this exports all of them
from a unified module.
In the next step, I'll add a script to generate this new module.
* Add script to generate inline Fizz runtime
This adds a script to generate the inline Fizz runtime. Previously, the
runtime source was in an inline comment, and a compiled version of the
instructions was hardcoded as strings into the Fizz implementation,
where they are injected into the HTML stream.
I've moved the source for the instructions to a regular JavaScript
module. A script compiles the instructions with Closure, then generates
another module that exports the compiled instructions as strings.
Then the Fizz runtime imports the instructions from the
generated module.
To build the instructions, run:
`yarn generate-inline-fizz-runtime`
In the next step, I'll add a CI check to verify that the generated files
are up to date.
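As a rough sketch of what the generation step could look like (the file layout, the `compile` stub, and the output format are assumptions, not the actual script):

```js
const fs = require('fs');
const path = require('path');

// Stand-in for the real compilation step, which runs the instruction
// source through Closure; here we only strip whitespace for illustration.
function compile(source) {
  return source.replace(/\s+/g, ' ').trim();
}

// Read every instruction module, compile it, and emit a module that
// exports each compiled instruction as a string.
function generateInlineRuntime(srcDir, outFile) {
  const lines = fs
    .readdirSync(srcDir)
    .filter(name => name.endsWith('.js'))
    .map(name => {
      const code = compile(fs.readFileSync(path.join(srcDir, name), 'utf8'));
      const exportName = path.basename(name, '.js');
      return `export const ${exportName} = ${JSON.stringify(code)};`;
    });
  fs.writeFileSync(outFile, lines.join('\n') + '\n');
}
```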
* Check in CI if generated Fizz runtime is in sync
The generated Fizz runtime is checked into source. In CI, we'll ensure
it stays in sync by running the script and confirming nothing changed.
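The check could be a small script run in CI along these lines (the exact commands are assumptions):

```js
const {execSync} = require('child_process');

// Regenerate the runtime, then fail if the checked-in files differ.
execSync('yarn generate-inline-fizz-runtime', {stdio: 'inherit'});
const changed = execSync('git diff --name-only').toString().trim();
if (changed !== '') {
  console.error('Generated Fizz runtime is out of date:\n' + changed);
  process.exit(1);
}
```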
- method unbinding is no longer supported in Flow for soundness reasons; this added a bunch of suppressions (example below)
- Flow now prevents objects from being supertypes of interfaces/classes
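For illustration, a suppression of the new method-unbinding error looks roughly like this (the surrounding code is made up):

```js
class Logger {
  prefix = '[log]';
  log(message) {
    console.log(this.prefix, message);
  }
}

const logger = new Logger();
// $FlowFixMe[method-unbinding] -- the unbound method loses `this`,
// so `this.prefix` would not resolve when `log` is called standalone.
const log = logger.log;
```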
ghstack-source-id: d7749cbad8
Pull Request resolved: https://github.com/facebook/react/pull/25412
This upgrade made more expressions invalidate refinements. In some
places this led to a large number of errors that I suppressed
automatically; those suppressions should be followed up on when the
code is touched.
I think most of them will require either manual annotations or moving
a value into a const to allow refinement.
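A sketch of the const pattern (types and names here are illustrative):

```js
// Flow invalidates the refinement of `props.value` across the function
// call, because the call could have mutated it.
function withoutConst(props /*: {value: number | null} */, sideEffect /*: () => void */) {
  if (props.value !== null) {
    sideEffect();
    // props.value is no longer known to be non-null here after the upgrade.
  }
}

// Copying the value into a const first keeps the refinement valid.
function withConst(props /*: {value: number | null} */, sideEffect /*: () => void */) {
  const value = props.value;
  if (value !== null) {
    sideEffect();
    return value + 1; // still refined to number
  }
  return 0;
}
```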
ghstack-source-id: a45b40abf0
Pull Request resolved: https://github.com/facebook/react/pull/25410
This was a large upgrade that removed "classic mode" and made "types first" the only option.
Most of the needed changes were done in previous PRs; this just fixes up the last few instances.
ghstack-source-id: 9612d95ba4
Pull Request resolved: https://github.com/facebook/react/pull/25408
This update range includes:
- `types_first` ([blog](https://flow.org/en/docs/lang/types-first/); all exports need annotated types) is now the default. I disabled it for now to make that change incremental.
- Generics that escape the scope they are defined in are now an error. I fixed some with explicit type annotations and suppressed the ones I couldn't easily figure out.
The shell package wasn't compiling because `yarn build-for-devtools` was incorrect: the `react-dom/test` package was renamed to `react-dom/unstable_testing`. This PR fixes that in the package.json.
Note: adding packages to the `yarn build-for-devtools` command isn't great in the long run. Eventually devtools should have its own build script.
* Remove object-assign polyfill
We rely on a more modern environment where `Object.assign` is available
natively, and we don't officially support IE, which would need more
extensive polyfilling anyway. All supported environments should have the
native version by now.
* Use shared/assign instead of Object.assign in code
This is so that we have one cached local instance in the bundle.
Ideally a compiler would do this for us, but we already follow this
pattern with `hasOwnProperty`, `isArray`, `Object.is`, etc.
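The shared module is essentially just a cached reference, along these lines (a sketch, not necessarily the exact file):

```js
// packages/shared/assign.js (approximate shape): one local binding that
// every bundle imports instead of touching Object.assign repeatedly.
const assign = Object.assign;

export default assign;
```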
* Transform Object.assign to now use shared/assign
We need this so that the shared instance is also used when object spread is compiled down to an assign call.
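Roughly, the intent is the following (this file is illustrative, not the actual Babel output):

```js
import assign from 'shared/assign';

const defaults = {a: 1};

// Source: const props = {...defaults, b: 2};
// Babel compiles the spread to an assign call, and the transform points
// that call at the shared cached instance instead of Object.assign:
const props = assign({}, defaults, {b: 2});
```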
Checks that if one React package depends on another, the current
version satisfies the given dependency range.
That way we don't forget to bump dependencies when we release a
new version.
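A sketch of such a check using the `semver` package (the data shapes and names are assumptions):

```js
const semver = require('semver');

// `packages` maps a package name to its parsed package.json.
function checkReactDependencies(packages) {
  const errors = [];
  for (const [name, pkg] of packages) {
    const allDeps = {...pkg.dependencies, ...pkg.peerDependencies};
    for (const [depName, range] of Object.entries(allDeps)) {
      const dep = packages.get(depName);
      if (dep !== undefined && !semver.satisfies(dep.version, range)) {
        errors.push(
          `${name} depends on ${depName}@${range}, but the current ` +
            `version of ${depName} is ${dep.version}`
        );
      }
    }
  }
  return errors;
}
```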
This change adds a new "react-dom/unstable_testing" entry point but I believe its contents will exactly match "react-dom/index" for the stable build. (The experimental build will have the added new selector APIs.)
* Output FIXME during build for unminified errors
The invariant Babel transform used to output a FIXME comment if it
could not find a matching error code. This could happen if a
configuration mistake caused an unminified message to slip through.
Linting the compiled bundles is the most reliable way to do it because
there's not a one-to-one mapping between source modules and bundles. For
example, the same source module may appear in multiple bundles, some of
which are minified and some of which aren't.
This updates the transform to output the same messages for Error calls.
The source lint rule is still useful for catching mistakes during
development, to prompt you to update the error codes map before pushing
the PR to CI.
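Conceptually, the lookup behaves something like this (a simplified sketch; the real thing is a Babel plugin operating on the AST, and the marker comment text here is illustrative):

```js
// `errorMap` inverts codes.json: message template -> numeric code.
function minifyErrorCall(messageTemplate, errorMap) {
  const code = errorMap.get(messageTemplate);
  if (code === undefined) {
    // No matching code: leave the message unminified, but emit a marker
    // comment so linting the compiled bundle can flag it.
    return {
      replacement: null,
      leadingComment: ' FIXME: unminified error message in production build ',
    };
  }
  // Matching code found: replace the message with a call to the
  // formatProdErrorMessage runtime.
  return {
    replacement: `formatProdErrorMessage(${code})`,
    leadingComment: null,
  };
}
```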
* Don't run error transform in development
We used to run the error transform in both production and development,
because in development it was used to convert `invariant` calls into
throw statements.
Now that we don't use `invariant` anymore, we only have to run the
transform for production builds.
* Add ! to FIXME comment so Closure doesn't strip it
I don't love this solution, because Closure could change this heuristic,
or we could switch to a different compiler that doesn't support it. But
it works.
We could add a bundle that contains an unminified error solely for the
purpose of testing it, but that seems like overkill.
* Alternate extract-errors that scrapes artifacts
The build script outputs a special FIXME comment when it fails to minify
an error message. CI will detect these comments and fail the workflow.
The comments also include the expected error message. So I added an
alternate extract-errors that scrapes unminified messages from the
build artifacts and updates `codes.json`.
This is nice because it works on partial builds. And you can also run it
after the fact, instead of needing to build all over again.
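A sketch of the scraping approach (the marker format, paths, and helper names are assumptions):

```js
const fs = require('fs');
const path = require('path');

// Walk the build directory, pull unminified messages out of the FIXME
// marker comments, and append any new ones to codes.json.
function extractErrorsFromArtifacts(buildDir, codesPath) {
  const codes = JSON.parse(fs.readFileSync(codesPath, 'utf8'));
  const known = new Set(Object.values(codes));
  let nextCode = Object.keys(codes).reduce(
    (max, key) => Math.max(max, Number(key) + 1),
    0
  );

  for (const file of fs.readdirSync(buildDir, {recursive: true})) {
    if (!file.endsWith('.js')) continue;
    const source = fs.readFileSync(path.join(buildDir, file), 'utf8');
    // Assumed marker format: a FIXME comment followed by the expected
    // message as a JSON string literal.
    const re = /FIXME[^"]*("(?:[^"\\]|\\.)*")/g;
    let match;
    while ((match = re.exec(source)) !== null) {
      const message = JSON.parse(match[1]);
      if (!known.has(message)) {
        codes[String(nextCode++)] = message;
        known.add(message);
      }
    }
  }
  fs.writeFileSync(codesPath, JSON.stringify(codes, null, 2) + '\n');
}
```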
* Disable error minification in more bundles
Not worth it, because the savings from minifying the small number of
errors don't outweigh the size of the `formatProdErrorMessage` runtime.
* Run extract-errors script in CI
The lint_build job already checks for unminified errors, but the output
isn't super helpful.
Instead I've added a new job that runs the extract-errors script and
fails the build if `codes.json` changes. It also outputs the expected
diff so you can easily see which messages were missing from the map.
* Replace old extract-errors script with new one
Deletes the old extract-errors in favor of extract-errors2
The `download-experimental-build` script has been flaky on CodeSandbox
CI, I think because of GitHub rate limiting.
Until we figure out how to fix that, I've updated it to fall back to a
local build.
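The fallback amounts to something like this (the command names here are assumptions, not the actual script contents):

```js
const {execSync} = require('child_process');

try {
  // May fail intermittently, e.g. due to GitHub rate limiting.
  execSync('node ./scripts/release/download-experimental-build.js', {
    stdio: 'inherit',
  });
} catch (error) {
  console.warn('Download failed; falling back to a local build.');
  execSync('yarn build', {stdio: 'inherit'});
}
```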
* Remove reentrant check from Fizz/Flight
* Make startFlowing explicit in Flight
This is already an explicit call in Fizz; this makes flowing explicit in
Flight as well. That way we can avoid calling it in start() for Web
Streams and therefore avoid the reentrant call.
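As an illustration of the reentrancy issue (a general pattern sketch, not the actual Flight code):

```js
// Kicking off production from inside start() re-enters the producer while
// the stream is still being constructed. Capturing the controller and
// starting flow explicitly afterwards avoids that.
function createResponseStream(produceChunk) {
  let controller;
  const stream = new ReadableStream({
    start(c) {
      controller = c; // only capture the controller here
    },
  });

  // Explicit, non-reentrant kickoff once construction has finished.
  function startFlowing() {
    let chunk;
    while ((chunk = produceChunk()) !== null) {
      controller.enqueue(chunk);
    }
    controller.close();
  }
  startFlowing();

  return stream;
}
```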
* Add regression test
This test doesn't actually error, because the streams polyfill behaves
according to spec rather than like Chrome.
* Update the Web Streams polyfill
We don't strictly need this, but it's just in case there are differences that have since been fixed.
Update all our local scripts to use `build` instead of `build2`.
There are still downstream scripts that depend on `build2`, though, so
we can't remove it yet.
* Track all suspended work while it's still pending
This allows us to abort work and put everything into client rendered mode
if we don't want to wait for further I/O.
It also allows us to cancel fallbacks if we complete the main content
before the fallback.
* Expose abort API to the browser streams
Since this API already returns a value, we need to use destructuring to
expose more options.
* Add a test in which the client actually client-renders it
* Use AbortSignal option for W3C streams instead of external control (see the sketch below)
* Clean up listener after it's used once
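To illustrate the AbortSignal option mentioned above, usage looks roughly like this (`App` and the timeout are placeholders, and the exact entry point name may have differed at the time of this change):

```js
import * as React from 'react';
import {renderToReadableStream} from 'react-dom/server';

async function handleRequest(App) {
  const controller = new AbortController();
  // Stop waiting on slow Suspense boundaries after 5 seconds; anything
  // still pending is emitted in client-rendered mode instead.
  setTimeout(() => controller.abort(), 5000);

  const stream = await renderToReadableStream(React.createElement(App), {
    signal: controller.signal,
  });

  return new Response(stream, {
    headers: {'Content-Type': 'text/html'},
  });
}
```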