Legacy Suspense is weird. We intentionally commit a suspended fiber in
an inconsistent state. If the fiber suspended before it mounted any
effects, then the fiber won't have a PassiveStatic effect flag, which
will trigger the "missing expected subtreeFlag" warning.
To avoid the false positive, we'd need to mark fibers that commit in an
incomplete state, somehow. For now I'll disable the warning in legacy
mode, with the assumption that most of the bugs that would trigger it
are either exclusive to concurrent mode or exist in both.
This reverts commit 8ed0c85bf174ce6e501be62d9ccec1889bbdbce1.
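For illustration, a minimal sketch of the guard described above; the helper and the exact bit values are assumptions, not React's actual code:

```js
// Hypothetical simplification of the dev-only validation. The bitmask
// names mirror React's internals but are defined here as assumptions.
const NoMode = 0b000000;
const ConcurrentMode = 0b000001;

function validateSubtreeFlags(finishedWork, expectedStaticFlags) {
  if ((finishedWork.mode & ConcurrentMode) === NoMode) {
    // Legacy mode: a fiber that suspended before mounting its effects
    // can legitimately commit without PassiveStatic, so warning here
    // would be a false positive.
    return;
  }
  if ((finishedWork.subtreeFlags & expectedStaticFlags) !== expectedStaticFlags) {
    console.error('Expected fiber to have a matching static subtreeFlag set.');
  }
}
```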
The host tree is a cyclical structure. Leaking a single DOM node can
retain a large amount of memory. React-managed DOM nodes also point
back to a fiber tree.
Perf testing suggests that disconnecting these fields yields a large
memory win. That implies there are leaks in non-React code, but since
those are hard to eliminate completely, it may still be worth the extra
work to clear these fields.
I'm moving this to level 2 to confirm whether this alone is responsible
for the memory savings, or if there are other fields that are retaining
large amounts of memory.
In our plan for removing the alternate model, DOM nodes would not be
connected to fibers, except at the root of the whole tree, which is
easy to disconnect on deletion. So in that world, we likely won't have
to do any additional work.
This reverts commit 0e3c7e1d62efb6238b69e5295d45b9bd2dcf9181.
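A rough sketch of the clearing step, loosely modeled on React's `detachFiberAfterEffects`; treat the exact set of fields as an assumption:

```js
// Null out the fields that link the fiber tree and the host tree so
// that leaking a single DOM node can't retain the whole structure.
function detachFiberAfterEffects(fiber) {
  fiber.child = null;
  fiber.sibling = null;
  fiber.return = null;
  fiber.alternate = null;
  fiber.stateNode = null; // the React-managed host (DOM) node
  fiber.memoizedProps = null;
  fiber.memoizedState = null;
  fiber.updateQueue = null;
  fiber.dependencies = null;
}
```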
When called from inside an effect, flushSync cannot synchronously flush
its updates because React is already working. So we fire a warning.
However, we should still change the priority of the updates to sync so
that they flush at the end of the current task.
This only affects useEffect because updates inside useLayoutEffect (and
the rest of the commit phase, like ref callbacks) are already sync.
This reverts commit 2e7aceeb5c8b6e5c61174c0e9731e263e956e445.
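A sketch of the scenario, assuming a hypothetical component:

```js
import * as React from 'react';
import { flushSync } from 'react-dom';

function App() {
  const [flushed, setFlushed] = React.useState(false);
  React.useEffect(() => {
    // Warns, because React is already working and can't flush
    // synchronously. The update is still upgraded to sync priority,
    // so it flushes at the end of the current task instead of in a
    // later, lower-priority batch.
    flushSync(() => setFlushed(true));
  }, []);
  return flushed ? 'flushed' : 'pending';
}
```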
If a discrete render results in passive effects, we should flush them
synchronously at the end of the current task so that the result is
immediately observable. For example, if a passive effect adds an event
listener, the listener will be added before the next input.
We don't need to do this for effects that don't have discrete/sync
priority, because we assume they are not order-dependent and do not
need to be observed by external systems.
For legacy mode, we will maintain the existing behavior, since it hasn't
been reported as an issue, and we'd have to do additional work to
distinguish "legacy default sync" from "discrete sync" to prevent all
passive effects from being treated this way.
This reverts commit faa1e127f1ba755da846bc6ce299cdefaf97721f.
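For illustration, a sketch of that scenario with a hypothetical component: the click is a discrete event, so React flushes the resulting passive effect before the next input can fire.

```js
import * as React from 'react';

function Shortcuts() {
  const [enabled, setEnabled] = React.useState(false);
  React.useEffect(() => {
    if (!enabled) return;
    // Because the update came from a discrete event (the click), this
    // effect is flushed synchronously at the end of the current task,
    // so the listener exists before the user's next input.
    const onKeyDown = () => console.log('shortcut observed');
    window.addEventListener('keydown', onKeyDown);
    return () => window.removeEventListener('keydown', onKeyDown);
  }, [enabled]);
  return <button onClick={() => setEnabled(true)}>Enable shortcuts</button>;
}
```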
* Support nesting of startTransition and flushSync
* Unset transition before entering any special execution contexts
Co-authored-by: Andrew Clark <git@andrewclark.io>
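A sketch of the nesting this supports; the API names follow the public React 18 surface, and the state setters are hypothetical:

```js
import * as React from 'react';
import { flushSync } from 'react-dom';

// Hypothetical setters from surrounding component state.
React.startTransition(() => {
  setTab('photos'); // transition priority
  flushSync(() => {
    // The transition is unset before entering flushSync, so this
    // update is sync and is not treated as part of the transition.
    setHighlight(true);
  });
  setDetail(true); // back in the transition
});
```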
Previously, DevTools recursed over both children and siblings during mount. This caused potential stack overflows when there were a lot of children (e.g. a list containing many items).
Given the following example component tree:
```
A
├── B
│   └── E
├── C
└── D
    └── F
        └── G
```
A method that recurses for every child and sibling leads to a max depth of 6:
```
A
A -> B
A -> B -> E
A -> B -> C
A -> B -> C -> D
A -> B -> C -> D -> F
A -> B -> C -> D -> F -> G
```
The stack gets deeper as the tree gets either deeper or wider.
A method that recurses for every child and iterates over siblings leads to a max depth of 4:
```
A
A -> B
A -> B -> E
A -> C
A -> D
A -> D -> F
A -> D -> F -> G
```
The stack gets deeper as the tree gets deeper but is resilient to wide trees (e.g. lists containing many items).
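A minimal sketch of the change, assuming a simplified fiber shape (`child`/`sibling` pointers) and a hypothetical `recordMount` callback:

```js
// Recurse into children, but iterate over siblings, so the stack
// depth grows with tree depth only, not with tree width.
function mountFiberRecursively(firstChild) {
  let fiber = firstChild;
  while (fiber !== null) {
    recordMount(fiber); // hypothetical: report this fiber to the frontend
    if (fiber.child !== null) {
      mountFiberRecursively(fiber.child);
    }
    fiber = fiber.sibling;
  }
}
```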
Add an explicit Bridge protocol version to the frontend and backend components as well as a check during initialization to ensure that both are compatible. If not, the frontend will display either upgrade or downgrade instructions.
Note that only the `react-devtools-core` (React Native) and `react-devtools-inline` (Code Sandbox) packages implement this check. Browser extensions inject their own backend and so the check is unnecessary. (Arguably the `react-devtools-inline` check is also unlikely to be necessary _but_ has been added as an extra guard for use cases such as Replay.io.)
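A hedged sketch of the handshake; the message and helper names here are assumptions, not the actual Bridge API:

```js
// Shared between frontend and backend builds.
const BRIDGE_PROTOCOL_VERSION = 2; // assumed constant

// Backend: advertise its protocol version during initialization.
bridge.send('bridgeProtocolVersion', BRIDGE_PROTOCOL_VERSION);

// Frontend: verify compatibility before proceeding.
bridge.addListener('bridgeProtocolVersion', (backendVersion) => {
  if (backendVersion !== BRIDGE_PROTOCOL_VERSION) {
    // Hypothetical helper: show upgrade or downgrade instructions
    // depending on which side is older.
    showVersionMismatchInstructions(backendVersion, BRIDGE_PROTOCOL_VERSION);
  }
});
```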
We have a feature called "expiration" whose purpose is to prevent
a concurrent update from being starved by higher priority events.
If a lane is CPU-bound for too long, we finish the rest of the work
synchronously without allowing further interruptions.
In the current implementation, we do this in sort of a roundabout way:
once a lane is determined to have expired, we entangle it with SyncLane
and switch to the synchronous work loop.
There are a few flaws with the approach. One is that SyncLane has a
particular semantic meaning besides its non-yieldiness. For example,
`flushSync` will force remaining Sync work to finish; currently, that
also includes expired work, which isn't an intended behavior, but rather
an artifact of the implementation.
An even worse example is that passive effects triggered by a Sync
update are flushed synchronously, before paint, so that their result
is guaranteed to be observed by the next discrete event. But expired
work has no such requirement: we're flushing expired effects before
paint unnecessarily.
Aside from the behavioral implications, the current implementation has
proven to be fragile: more than once, we've accidentally regressed
performance due to a subtle change in how expiration is handled.
This PR aims to radically simplify how we model starvation protection by
scaling back the implementation as much as possible. In this new model,
if a lane is expired, we disable time slicing. That's it. We don't
entangle it with SyncLane. The only thing we do is skip the call to
`shouldYield` in between each time slice. This is identical to how we
model synchronous-by-default updates in React 18.
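In code, the new model reduces to a choice of work loop. This sketch uses React's internal names (`shouldYield`, `performUnitOfWork`, `includesExpiredLane`) but simplifies everything around them:

```js
// If any lane has expired, skip shouldYield() entirely: same work,
// same semantics, just no yielding between units.
function renderRoot(root, lanes) {
  if (includesExpiredLane(root, lanes)) {
    workLoopSync();
  } else {
    workLoopConcurrent();
  }
}

function workLoopConcurrent() {
  while (workInProgress !== null && !shouldYield()) {
    performUnitOfWork(workInProgress);
  }
}

function workLoopSync() {
  while (workInProgress !== null) {
    performUnitOfWork(workInProgress);
  }
}
```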
Typically we don't need to restore the context here, because we assume
we'll terminate the rest of the subtree; since we're not rendering any
siblings, we don't need the correct context.
However, after a nested suspense boundary we do need to restore the
context. The boundary itself could do this, but since we already do it
in the suspense branch of renderNode, we might as well do it in the
error case too, which isn't very perf-sensitive anyway.
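A rough sketch of the shape this describes; `task.context` and the restore logic are hypothetical stand-ins for Fizz's actual task state:

```js
function renderNode(request, task, node) {
  // Snapshot whatever context the task is currently rendering in.
  const previousContext = task.context;
  try {
    renderNodeDestructive(request, task, node);
  } catch (thrownValue) {
    if (
      thrownValue !== null &&
      typeof thrownValue === 'object' &&
      typeof thrownValue.then === 'function'
    ) {
      // Suspense branch: spawn a retry task, then restore the context
      // so the boundary's siblings render with the correct one.
      spawnNewSuspendedTask(request, task, thrownValue);
      task.context = previousContext;
    } else {
      // Error branch: restore here too. A nested suspense boundary may
      // have switched the context, and this path isn't perf-sensitive.
      task.context = previousContext;
      throw thrownValue;
    }
  }
}
```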
The outermost `batchedUpdates` call flushes pending sync updates at the
end. This was intended for legacy sync mode, but it also happens to
flush discrete updates in concurrent mode.
Instead, we should only flush sync updates at the end of
`batchedUpdates` for legacy roots. Discrete sync updates can wait to
flush in the microtask.
`discreteUpdates` has the same issue, which is how I originally noticed
this, but I'll change that one in a separate commit since it requires
updating a few (no longer relevant) internal tests.
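A simplified sketch of the fix; the execution-context bitmask names follow React's internals, and the flush helper is assumed from the description:

```js
function batchedUpdates(fn, a) {
  const prevExecutionContext = executionContext; // assumed module state
  executionContext |= BatchedContext;
  try {
    return fn(a);
  } finally {
    executionContext = prevExecutionContext;
    if (executionContext === NoContext) {
      // Only legacy roots flush synchronously here. Discrete updates
      // on concurrent roots wait for the microtask instead.
      flushSyncCallbacksOnlyInLegacyMode();
    }
  }
}
```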
Even when updates are sync by default.
Discovered this quirk while working on #21322. Previously, when sync
default updates were enabled, continuous updates were treated like
default updates. We implemented this by assigning DefaultLane to
continuous updates. However, an unintended consequence of that approach
was that continuous updates would no longer interrupt transitions,
because default updates are not supposed to interrupt transitions.
To fix this, I changed the implementation to always assign separate
lanes for default and continuous updates. Then I entangle the
lanes together.
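Schematically (the lane names are React's; the surrounding code is simplified):

```js
// Continuous input keeps its own lane, so it can still interrupt
// a transition. (`isContinuousInput` is a hypothetical flag.)
const lane = isContinuousInput ? InputContinuousLane : DefaultLane;

// ...and the two lanes are entangled, so that when sync default
// updates are enabled they still flush together in one batch.
markRootEntangled(root, DefaultLane | InputContinuousLane);
```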
Instead of `performSyncWorkOnRoot`.
The conceptual model is that the only difference between sync default
updates (in React 18) and concurrent default updates (in a future major
release) is time slicing. All other behavior should be the same
(i.e. the stuff in `finishConcurrentRender`).
Given this, I think it makes more sense to model the implementation this
way, too. This exposed a quirk in the previous implementation where
non-sync work was sometimes mistaken for sync work and flushed too
early. In the new implementation, `performSyncWorkOnRoot` is only used
for truly synchronous renders (i.e. `SyncLane`), which should make these
mistakes less common.
Fixes most of the tests marked with TODOs from #21072.
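A sketch of the resulting control flow, using React's internal entry points with everything else elided:

```js
function performConcurrentWorkOnRoot(root) {
  const lanes = getNextLanes(root, NoLanes);
  // Sync-by-default and expired lanes disable time slicing, but
  // everything else (finishConcurrentRender, etc.) is shared.
  const shouldTimeSlice =
    !includesBlockingLane(root, lanes) && !includesExpiredLane(root, lanes);
  const exitStatus = shouldTimeSlice
    ? renderRootConcurrent(root, lanes)
    : renderRootSync(root, lanes);
  // performSyncWorkOnRoot is now reserved for truly synchronous
  // renders (SyncLane only).
  return exitStatus;
}
```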
* Warn if `finishedLanes` is empty in commit phase (a sketch follows the
fix list below)
See #21233 for context.
* Fix sloppy factoring when assigning finishedLanes
`finishedLanes` is assigned in `performSyncWorkOnRoot` and
`performConcurrentWorkOnRoot`. It's meant to represent whichever lanes we
used to render, but because of some sloppy factoring, it can sometimes
equal `NoLanes`.
The fixes are:
- Always check if the lanes are not `NoLanes` before entering the work
loop. There was a branch where this wasn't always true.
- In `performSyncWorkOnRoot`, don't assume the next lanes are sync; the
priority may have changed, or they may have been flushed by a
previous task.
- Don't re-assign the `lanes` variable (the one that gets assigned to
`finishedLanes`) until right before we enter the work loop, so that it
always corresponds to the newest complete root.
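A minimal sketch of the warning from the first bullet; the exact message and placement are assumptions:

```js
// In the commit phase (__DEV__ is React's build-time flag):
if (__DEV__) {
  if (root.finishedLanes === NoLanes) {
    console.error('root.finishedLanes should not be empty during a commit.');
  }
}
```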
We don't need this anymore because we flush in a microtask.
This should allow us to remove the logic in the event system that
tracks nested event dispatches.
I added a test to confirm that nested event dispatches don't trigger
a synchronous flush, like they would if we wrapped them in `flushSync`. It
already passed; I added it to prevent a regression.
Retries should be allowed to expire if they are CPU-bound for too long,
but when I made this change it caused a spike in browser crashes. There
must be some other underlying bug; it's not super urgent, but ideally we
should figure out why and fix it. Unfortunately we don't have a repro for
the crashes; they were only detected via production metrics.