* Call into Fabric to get current event priority
* Fix Flow errors
* Prettier
* Better handle null and undefined cases
* Remove optional chaining and use ?? operator
* prettier-all
* Use conditional ternary operator
* prettier
This does not mean that a release of 18.0 is imminent, only that the
main branch includes breaking changes.
Also updates the versioning scheme of the `@next` channel to include
the upcoming semver number, as well as the word "alpha" to indicate the
stability of the release.
- Before: 0.0.0-e0d9b28999
- After: 18.0.0-alpha-e0d9b28999
Since we track these versions in source, we can build `@latest`
releases in CI and store them as artifacts.
Then when it's time to release, and the build has been verified, we use
`prepare-release-from-ci` (the same script we use for `@next` and
`@experimental`) to fetch the already built and versioned packages.
Now that we track package versions in source, `@latest` builds should
be fully reproducible for a given commit. We can prepare the packages in
CI and store them as artifacts, the same way we do for `@next` and
`@experimental`.
Eventually this can replace the interactive script that we currently
use to swap out the version numbers.
The other nice thing about this approach is that we can run tests in CI
to verify that the packages are releasable, instead of waiting until
right before publish.
I named the output directory `oss-stable-semver`, to distinguish from
the `@next` prereleases that are located at `oss-stable`. I don't love
this naming. I'd prefer to use the name of the corresponding npm dist
tag. I'll do that in a follow-up, though, since the `oss-stable` name is
referenced in a handful of places.
Current naming (after this PR):
- `oss-experimental` → `@experimental`
- `oss-stable` → `@next`
- `oss-stable-semver` → `@latest`
Proposed naming (not yet implemented, requires more work):
- `oss-experimental` → `@experimental`
- `oss-next` → `@next`
- `oss-latest` → `@latest`
The versioning scheme for `@next` releases does not include semver
information. Like `@experimental`, the versions are based only on the
hash, i.e. `0.0.0-<commit_sha>`. The reason we do this is to prevent
the use of a tilde (~) or caret (^) to match a range of
prerelease versions.
For `@experimental`, I think this rationale still makes sense — those
releases are very unstable, with frequent breaking changes. But `@next`
is not as volatile. It represents the next stable release. So, I think
we can afford to include an actual version number at the beginning of
the string instead of `0.0.0`.
We can also add a label that indicates readiness of the upcoming
release, like "alpha", "beta", "rc", etc.
To prepare for the new versioning scheme, I updated the build
script. However, **this PR does not enable the new versioning scheme
yet**. I left a TODO above the line that we'll change once we're ready.
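As a rough illustration of that scheme (the helper name and arguments are made up for this sketch, not the actual build script API):
```js
// Hypothetical sketch: compose an @next version string from the tracked
// semver version, a stability label, and the commit sha.
function getNextVersion(trackedVersion, label, commitSha) {
  // e.g. getNextVersion('18.0.0', 'alpha', 'e0d9b28999')
  //   -> '18.0.0-alpha-e0d9b28999'
  return `${trackedVersion}-${label}-${commitSha}`;
}
```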
We need to specify the expected next version numbers for each package,
somewhere. These aren't encoded anywhere today — we don't specify
version numbers until right before publishing to `@latest`, using an
interactive script: `prepare-release-from-npm`.
Instead, what we can do is track these version numbers in a module. I
added `ReactVersions.js` that acts as the single source of truth for
every package's version. The build script uses this module to build the
`@next` packages.
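As a minimal sketch, such a module could look something like this (the exports and version numbers below are illustrative, not the real file contents):
```js
// ReactVersions.js (illustrative sketch): a single source of truth for the
// next version of each package. The build script reads from here instead of
// relying on the package.json files.
const ReactVersion = '18.0.0';

// Hypothetical per-package versions; packages can rev independently.
const nextVersions = {
  react: ReactVersion,
  'react-dom': ReactVersion,
  'react-test-renderer': ReactVersion,
  'react-reconciler': '0.28.0',
};

module.exports = {
  ReactVersion,
  nextVersions,
};
```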
In the future, I want to start building the `@latest` packages the same
way we do `@next` and `@experimental`. (What we do now is download a
`@next` release from npm and swap out its version numbers.) Then we
could run automated tests in CI to confirm the packages are releasable,
instead of waiting to verify that right before publish.
* Resolve the entry point for tests the same way builds do
This way the source tests exercise the same entry point configuration as the builds.
* Gate test selectors on www
These are currently only exposed in www builds
* Gate createEventHandle / useFocus on www
These are enabled in both www variants but not OSS experimental.
* Temporarily disable www-modern entry point
Use the main one that has all the exports until we fix more tests.
* Remove enableCache override that's no longer correct
* Open gates for www
These weren't covered before because they used Cache, which wasn't exposed.
* Convert ES6/TypeScript/CoffeeScript tests to createRoot + act
* Change expectation for WWW+VARIANT because the deferRenderPhaseUpdateToNextBatch flag breaks this behavior
* Define global __WWW__ = true flag during www tests
We already do that for __PERSISTENT__.
* Use @gate www in ReactSuspenseCallback
This allows it to not be internal anymore. We test it against the www build.
* Clean up Scheduler forks
* Un-shadow variables
* Use timer globals directly, add a test for overrides
* Remove more window references
* Don't crash for undefined globals + tests
* Update lint config globals
* Fix test by using async act
* Add test fixture
* Delete test fixture
Uses the layout of the build artifact directory to infer the format
of a given file, and which lint rules to apply.
This has the effect of decoupling the lint build job from the existing
Rollup script, so that if we ever add additional post-processing, or
if we replace Rollup, it will still work.
But the immediate motivation is to replace the separate "stable" and
"experimental" lint-build jobs with a single combined job.
* Add NewContext module
This implements a reverse linked list tree containing the previous
contexts.
* Implement recursive algorithm
This algorithm pops the contexts back to a shared ancestor on the way down
the stack and then pushes the new contexts in reverse order on the way back
up (see the sketch after this list).
* Move isPrimaryRenderer to ServerFormatConfig
This is primarily intended to support renderToString with a separate build
from the main one. This allows them to be nested.
* Wire up more element type matchers
* Wire up Context Provider type
* Wire up Context Consumer
* Test
* Implement reader in class
* Update error codes
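A minimal sketch of the recursive switch from the "Implement recursive algorithm" item above. The node shape (`parent`, `depth`, `context`, `value`, `parentValue`) and helper names are assumptions for illustration, not the real module:
```js
// Restore the value that was current before this node was pushed.
function popNode(node) {
  node.context.currentValue = node.parentValue;
}

// Remember the current value, then make this node's value current.
function pushNode(node) {
  node.parentValue = node.context.currentValue;
  node.context.currentValue = node.value;
}

// Switch from one context snapshot to another: pop the old branch back to the
// shared ancestor on the way down the recursion, then push the new branch in
// reverse order on the way back up.
function switchContext(prev, next) {
  if (prev === next) {
    // Reached the shared ancestor (possibly the empty root); nothing to do.
    return;
  }
  if (prev !== null && (next === null || prev.depth >= next.depth)) {
    popNode(prev);
    switchContext(prev.parent, next);
  } else {
    switchContext(prev, next.parent);
    pushNode(next);
  }
}
```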
My personal workflow is to develop against the www-modern release
channel, with the variant flags enabled, because it encompasses the
largest set of features. Then I rely on CI to run the tests against
all the other configurations.
So in practice, I almost always run
```
yarn test -r=www-modern --variant TEST_FILE
```
instead of
```
yarn test TEST_FILE
```
So, I've updated the `yarn test` command to use those options
by default.
* Implement DOM format config structure
* Styles
* Input warnings
* Textarea special cases
* Select special cases
* Option special cases
We read the currently selected value from the FormatContext.
* Warning for non-lower case HTML
We don't change to lower case at runtime anymore but keep the warning.
* Pre tags' innerHTML needs to be prefixed
This matches the effect you'd get if you did the equivalent on the client
using innerHTML.
* Extract errors
* Encode tables as a special insertion mode
The table modes are special in that their children can't be created outside
a table context, so we need the segment container to be wrapped in a table.
* Move formatContext from Task to Segment
It works the same otherwise. It's just that this context needs to outlive
the task so that I can use it when writing the segment.
* Use template tag for placeholders and inserted dummy nodes with IDs
These can be used in any parent, at least outside IE11. I'm not sure yet
what happens to them in IE11.
I'm also not sure whether these are bad for perf, since they're special nodes.
* Add special wrappers around inserted segments depending on their insertion mode
* Allow the root namespace to be configured
This allows us to insert the correct wrappers when streaming into an
existing non-HTML tree.
* Add comment
The event priority constants exported by the reconciler package are
meant to be used by the reconciler (host config) itself. So it doesn't
make sense to export them from a module that requires them.
To break the cycle, we can move them to a separate module and import
that. This looks like a "deep import" of an internal module, which we
try to avoid, but conceptually these are part of the public interface
of the reconciler module. So, no different than importing from the main
`react-reconciler`.
We do need to be careful about not mixing these types of imports with
implementation details. Those are the ones to really avoid.
An unintended benefit of the reconciler fork infra is that it makes
deep imports harder. Any module that we treat as "public", like this
one, needs to account for the `enableNewReconciler` flag and forward
to the correct implementation.
Some legacy environments cannot encode non-strings. Those would specify
both as strings; they'll throw for binary data.
Some environments have to encode strings (like web streams). Those would
encode both as uint8array.
Some environments (like Node) can do either. It can be beneficial to leave
things as strings in case the native stream can do something smart with them.
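A hedged sketch of the split (function and variable names are made up; only the string-versus-bytes distinction is the point):
```js
// Illustrative: a Web Streams-style config must hand the stream bytes, so it
// encodes chunks eagerly; a Node-style config can leave chunks as strings and
// let the native stream decide what to do with them.
const encoder = new TextEncoder();

function toChunkForWebStream(content) {
  return typeof content === 'string' ? encoder.encode(content) : content;
}

function toChunkForNodeStream(content) {
  // Strings pass through untouched; binary data stays binary.
  return content;
}
```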
* Copy some infrastructure patterns from Flight
* Basic data structures
* Move structural nodes and instruction commands to host config
* Move instruction command to host config
In the DOM this is implemented as script tags. The first time it's emitted
it includes the function. Future calls invoke the same function.
The size of the complete boundary function in particular is unfortunately
large.
* Implement Fizz Noop host configs
This is implemented not as a serialized protocol but by bypassing the
serialization when possible; instead it's like a live tree being built.
* Implement React Native host config
This is not wired up. I just need something for the Flow types since
Flight and Fizz are both handled by the isServerSupported flag.
Might as well add something though.
The format follows the same structure as the HTML one, but as a simpler
binary encoding (see the sketch after this list).
Each entry is a tag followed by some data and terminated by null.
* Check in error codes
* Comment
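Purely as an illustration of that framing (tag values, payload encoding, and the writer API are placeholders, not the real host config):
```js
// Illustrative sketch: each entry is a tag byte, followed by its payload
// bytes, terminated by a 0 byte.
function writeEntry(chunks, tag, payloadBytes) {
  chunks.push(Uint8Array.of(tag));
  chunks.push(payloadBytes);
  chunks.push(Uint8Array.of(0));
}
```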
* Move context comparison to consumer
In the lazy context implementation, not all context changes are
propagated from the provider, so we can't rely on the propagation alone
to mark the consumer as dirty. The consumer needs to compare to the
previous value, like we do for state and context.
I added a `memoizedValue` field to the context dependency type. Then in
the consumer, we iterate over the current dependencies to see if
something changed. We only do this iteration after props and state have
already bailed out, so it's a relatively uncommon path, except at the
root of a changed subtree. Alternatively, we could move these
comparisons into `readContext`, but that's a much hotter path, so I
think this is an appropriate trade off.
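A simplified sketch of that consumer-side check (field names are trimmed down for illustration):
```js
// Walk the fiber's context dependency list and compare each memoized value
// against the context's current value. If anything differs, the consumer has
// pending work even though props and state already bailed out.
function checkIfContextChanged(dependencies) {
  let dependency = dependencies.firstContext;
  while (dependency !== null) {
    const newValue = dependency.context._currentValue;
    if (!Object.is(newValue, dependency.memoizedValue)) {
      return true;
    }
    dependency = dependency.next;
  }
  return false;
}
```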
* [Experiment] Lazily propagate context changes
When a context provider changes, we scan the tree for matching consumers
and mark them as dirty so that we know they have pending work. This
prevents us from bailing out if, say, an intermediate wrapper is
memoized.
Currently, we propagate these changes eagerly, at the provider.
However, in many cases, we would have ended up visiting the consumer
nodes anyway, as part of the normal render traversal, because there's no
memoized node in between that bails out.
We can save CPU cycles by propagating changes only when we hit a
memoized component — so, instead of propagating eagerly at the provider,
we propagate lazily if or when something bails out.
Most of our bailout logic is centralized in
`bailoutOnAlreadyFinishedWork`, so this ended up being not that
difficult to implement correctly.
There are some exceptions: Suspense and Offscreen. Those are special
because they sometimes defer the rendering of their children to a
completely separate render cycle. In those cases, we must take extra
care to propagate *all* the context changes, not just the first one.
I'm pleasantly surprised at how little I needed to change in this
initial implementation. I was worried I'd have to use the reconciler
fork, but I ended up being able to wrap all my changes in a regular
feature flag. So, we could run an experiment in parallel to our other
ones.
I do consider this a risky rollout overall because of the potential for
subtle semantic deviations. However, the model is simple enough that I
don't expect us to have trouble fixing regressions if or when they arise
during internal dogfooding.
---
This is largely based on [RFC#118](https://github.com/reactjs/rfcs/pull/118),
by @gnoff. I did deviate in some of the implementation details, though.
The main one is how I chose to track context changes. Instead of storing
a dirty flag on the stack, I added a `memoizedValue` field to the
context dependency object. Then, to check if something has changed, the
consumer compares the new context value to the old (memoized) one.
This is necessary because of Suspense and Offscreen — those components
defer work from one render into a later one. When the subtree continues
rendering, the stack from the previous render is no longer available.
But the memoized values on the dependencies list are. This requires a
bit more work when a consumer bails out, but nothing considerable, and
there are ways we could optimize it even further. Conceptually, this
model is really appealing, since it matches how our other features
"reactively" detect changes — `useMemo`, `useEffect`,
`getDerivedStateFromProps`, the built-in cache, and so on.
I also intentionally dropped support for
`unstable_calculateChangedBits`. We're planning to remove this API
anyway before the next major release, in favor of context selectors.
It's an unstable feature that we never advertised; I don't think it's
seen much adoption.
Co-Authored-By: Josh Story <jcs.gnoff@gmail.com>
* Propagate all contexts in single pass
Instead of propagating the tree once per changed context, we can check
all the contexts in a single propagation. This inverts the two loops so
that the faster loop (O(numberOfContexts)) is inside the more expensive
loop (O(numberOfFibers * avgContextDepsPerFiber)).
This adds a bit of overhead to the case where only a single context
changes because you have to unwrap the context from the array. I'm also
unsure if this will hurt cache locality.
Co-Authored-By: Josh Story <jcs.gnoff@gmail.com>
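Roughly, the inverted loops look like this (a simplified sketch of the per-fiber check; the outer fiber traversal is elided):
```js
// For each fiber visited during propagation (the expensive outer loop), check
// its context dependencies against every changed context (the cheap inner
// loop), instead of doing one full tree pass per changed context.
function dependsOnChangedContext(dependencies, changedContexts) {
  for (let dep = dependencies.firstContext; dep !== null; dep = dep.next) {
    for (let i = 0; i < changedContexts.length; i++) {
      if (dep.context === changedContexts[i]) {
        return true;
      }
    }
  }
  return false;
}
```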
* Stop propagating at nearest dependency match
Because we now propagate all context providers in a single traversal, we
can defer context propagation to a subtree without losing information
about which context providers we're deferring — it's all of them.
Theoretically, this is a big optimization because it means we'll never
propagate to any tree that has work scheduled on it, nor will we ever
propagate the same tree twice.
There's an awkward case related to bailing out of the siblings of a
context consumer. Because those siblings don't bail out until after
they've already entered the begin phase, we have to do extra work to
make sure they don't unnecessarily propagate context again. We could
avoid this by adding an earlier bailout for sibling nodes, something
we've discussed in the past. We should consider this during the next
refactor of the fiber tree structure.
Co-Authored-By: Josh Story <jcs.gnoff@gmail.com>
* Mark trees that need propagation in readContext
Instead of storing matched context consumers in a Set, we can mark
when a consumer receives an update inside `readContext`.
I hesitated to put anything in this function because it's such a hot
path, but so are bailouts. Fortunately, we only need to set this flag
once, the first time a context is read. So I think it's a reasonable
trade off.
In exchange, propagation is faster because we no longer need to
accumulate a Set of matched consumers, and fiber bailouts are faster
because we don't need to consult that Set. And the code is simpler.
Co-authored-by: Josh Story <jcs.gnoff@gmail.com>
With this change, if a node is a Fabric node, we route the setJSResponder call to FabricUIManager. The native counterpart has already landed. Tested internally as D26241364.
* Move direct port access into a function
* Fork based on presence of setImmediate
* Copy SchedulerDOM-test into another file
* Change the new test to use shimmed setImmediate
* Clarify comment
* Fix test to work with existing feature detection
* Add flags
* Disable OSS flag and skip tests
* Use VARIANT to reenable tests
* lol
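A minimal sketch of the feature detection that the fork above is based on (the task-loop callback is stubbed out; treat the wiring as illustrative):
```js
function performWorkUntilDeadline() {
  // ...Scheduler task loop elided in this sketch...
}

// Prefer setImmediate when the host provides it; otherwise fall back to a
// MessageChannel port, and finally to setTimeout.
let schedulePerformWorkUntilDeadline;
if (typeof setImmediate === 'function') {
  schedulePerformWorkUntilDeadline = () => {
    setImmediate(performWorkUntilDeadline);
  };
} else if (typeof MessageChannel !== 'undefined') {
  const channel = new MessageChannel();
  channel.port1.onmessage = performWorkUntilDeadline;
  schedulePerformWorkUntilDeadline = () => {
    channel.port2.postMessage(null);
  };
} else {
  schedulePerformWorkUntilDeadline = () => {
    setTimeout(performWorkUntilDeadline, 0);
  };
}
```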
When running the publish workflow, either via the command line or
via the daily cron job, we should use a constant SHA instead of
whatever happens to be at the head of the main branch at the time the
workflow is run.
The difference is subtle: currently, the SHA is read at runtime,
each time the workflow is run. With this change, the SHA is read right
before the workflow is created and passed in as a constant parameter.
In practical terms, this means if a workflow is re-run via the CircleCI
web UI, it will always re-run using the same commit SHA as the original
workflow, instead of fetching the latest SHA from GitHub, which may
have changed.
Also avoids a race condition where the head SHA changes in between the
Next publish job and the Experimental publish job.
* Add `supportsMicrotasks` to the host config
Only certain renderers support scheduling a microtask, so we need a
renderer specific flag that we can toggle. That way it's off for some
renderers and on for others.
I copied the approach we use for the other optional parts of the host
config, like persistent mode and test selectors.
Why isn't the feature flag sufficient?
The feature flag modules, confusingly, are not renderer-specific, at
least when running our tests against the source files. They are
meant to correspond to a release channel, not a renderer, but we got
confused at some point and haven't cleaned it up.
For example, when we run `yarn test`, Jest loads the flags from the
default `ReactFeatureFlags.js` module, even when we import the React
Native renderer — but in the actual builds, we load a different feature
flag module, `ReactFeatureFlags.native-oss.js`. There's no way in our
current Jest setup to load a different host config for each renderer,
because they all just import the same module. We should solve this by
creating a separate Jest project for each renderer, so that the flags loaded when
running against source are the same ones that we use in the
compiled bundles.
The feature flag (`enableDiscreteMicrotasks`) still exists — it's used
to set the React DOM host config's `supportsMicrotasks` flag to `true`.
(Same for React Noop.) The important part is that turning on the feature
flag does *not* affect the other renderers, like React Native.
The host config will likely outlive the feature flag, too, since the
feature flag only exists so we can gradually roll it out and measure the
impact in production; once we do, we'll remove it. Whereas the host
config flag may continue to be used to disable the discrete microtask
behavior for RN, because RN will likely use a native (non-JavaScript)
API to schedule its tasks.
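A sketch of the host-config shape this describes (the fallback details are assumptions; the point is the capability flag plus an implementation, which renderers that leave the flag off never touch):
```js
// A renderer that supports microtasks opts in via the capability flag and
// provides an implementation; other renderers leave supportsMicrotasks false
// and are unaffected by the feature flag.
export const supportsMicrotasks = true;

export function scheduleMicrotask(callback) {
  if (typeof queueMicrotask === 'function') {
    queueMicrotask(callback);
  } else {
    // Fallback: a resolved promise also flushes as a microtask.
    Promise.resolve(null).then(callback);
  }
}
```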
* Add `supportsMicrotask` to react-reconciler README
* Parallelize Flow in CI
We added more host configs recently, and we run all the checks in
sequence, so sometimes Flow ends up being the slowest CI job.
This splits the job across multiple processes.
* Fix environment variable typo
Co-authored-by: Ricky <rickhanlonii@gmail.com>
PR #20728 added a command to initiate a prerelease using CI, but it left
the publish job unimplemented. This fills in the publish job.
Uses an npm automation token for authorization, which bypasses the need
for a one-time password. The token is configured via CircleCI's
environment variable panel.
Currently, it will always publish the head of the main branch. If the
head has already been published, it will exit gracefully.
It does not yet support publishing arbitrary commits, though we could
easily add that. I don't know how important that use case is, because
for PR branches, you can use CodeSandbox CI instead. Or as a last
resort, run the publish script locally.
Always publishing from main is nice because it further incentivizes us
to keep main in a releasable state. It also takes the guesswork out of
publishing a prerelease that's in a broken state: as long as we don't
merge broken PRs, we're fine.