In DEV, we need to prevent the response from being GC'd while there are
still pending chunks for ReadableStreams or pending results for
AsyncIterables.
Co-authored-by: Sebastian "Sebbie" Silbermann <silbermann.sebastian@gmail.com>
Fix for the repro from the previous PR. A `Capture x -> y` effect should
downgrade to `ImmutableCapture` when the source value is maybe-frozen.
MaybeFrozen represents the union of a frozen value with a non-frozen
value.
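A minimal sketch (not the repro from the PR) of what a maybe-frozen source can look like, with hypothetical names: `value` may alias the frozen props array or a freshly created local array.
```tsx
// Hypothetical illustration: `value` is maybe-frozen because it is either the
// (frozen) props array or a new, mutable local array.
function Component({cond, items}: {cond: boolean; items: Array<string>}) {
  const value = cond ? items : [];
  // `Capture value -> wrapper` should downgrade to `ImmutableCapture`
  // because the source is maybe-frozen.
  const wrapper = [value];
  return <div>{wrapper.length}</div>;
}
```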
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35140).
* __->__ #35140
* #35139
## Summary
Fixes #35040. The React compiler incorrectly flags ref access within
event handlers as ref access at render time. For example, this code
would fail to compile with the error "Cannot access refs during render":
```tsx
const onSubmit = async (data) => {
  const file = ref.current?.toFile(); // Incorrectly flagged as error
};

<form onSubmit={handleSubmit(onSubmit)}>
```
This is a false positive because any built-in DOM event handler is
guaranteed not to run at render time. This PR only exempts built-in
event handlers, because there are no guarantees that user-defined event
handlers will not run at render time.
## How did you test this change?
I created 4 test fixtures which validate this change:
* allow-ref-access-in-event-handler-wrapper.tsx - Sync handler test
input
* allow-ref-access-in-event-handler-wrapper.expect.md - Sync handler
expected output
* allow-ref-access-in-async-event-handler-wrapper.tsx - Async handler
test input
* allow-ref-access-in-async-event-handler-wrapper.expect.md - Async
handler expected output
All linters and test suites also pass.
Summary:
This only matters when enableTreatSetIdentifiersAsStateSetters=true.
This pattern is still bad, but right now the validation can only
recommend moving things to "calculate in render".
A global setState should not be moved to render, not even conditionally,
and you can't remove the state without crossing component boundaries,
which makes this a different kind of fix.
So while we are only suggesting "calculate in render" as a fix, we
should disallow the lint from throwing in this case IMO.
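A hedged sketch of the kind of pattern this covers, assuming the setter comes from outside the component (here, a hypothetical `setServerValue` prop), so "calculate in render" is not an applicable fix:
```tsx
import {useEffect} from 'react';

// Hypothetical fixture sketch. With enableTreatSetIdentifiersAsStateSetters=true,
// `setServerValue` is treated as a state setter, but it isn't local state, so the
// lint shouldn't throw here.
function Child({value, setServerValue}) {
  useEffect(() => {
    setServerValue(value.trim());
  }, [value, setServerValue]);
  return <div>{value}</div>;
}
```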
Test Plan:
Added a fixture
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35135).
* __->__ #35135
* #35134
Summary:
The validation only allows the setState declaration as a usage outside
of the effect.
Another edge case: if you add the setState being validated to the
dependency array, the validation also opts out, since that counts as a
usage outside of the effect.
Added a bit of logic to consider the effect's deps when creating the
cache of setState usages within the effect.
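A hedged sketch of the fixture shape (not the exact fixture): including `setChecked` in the deps array should no longer opt the validation out.
```tsx
import {useEffect, useState} from 'react';

// Hypothetical sketch; the actual fixture may differ.
function Component({value}) {
  const [checked, setChecked] = useState('');
  useEffect(() => {
    // `setChecked` appears in the deps array below, which previously counted
    // as a usage outside the effect and disabled the validation.
    setChecked(value.trim());
  }, [value, setChecked]);
  return <div>{checked}</div>;
}
```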
Test Plan:
Added a fixture
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35134).
* #35135
* __->__ #35134
This PR fixes a critical bug where `ReadableStream({type: 'bytes'})`
instances passed through React Server Components (RSC) would stall after
reading only the first chunk or the first few chunks in the client. This
issue was masked by using `web-streams-polyfill` in tests, but manifests
with native Web Streams implementations.
The root cause is that when a chunk is enqueued to a
`ReadableByteStreamController`, the spec requires the underlying
ArrayBuffer to be synchronously transferred/detached. In the React
Flight Client's chunk parsing, embedded byte stream chunks are created
as views into the incoming RSC stream chunk buffer using `new
Uint8Array(chunk.buffer, offset, length)`. When embedded byte stream
chunks are enqueued, they can detach the shared buffer, leaving the RSC
stream parsing in a broken state.
The fix is to copy embedded byte stream chunks before enqueueing them,
preventing buffer detachment from affecting subsequent parsing. To not
affect performance too much, we use a zero-copy optimization: when a
chunk ends exactly at the end of the RSC stream chunk, or when the row
spans into the next RSC chunk, no further parsing will access that
buffer, so we can safely enqueue the view directly without copying.
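A minimal sketch of the copy-versus-zero-copy decision described above (hypothetical helper and parameter names, not the actual Flight client code):
```ts
// Enqueuing a view into a ReadableByteStreamController detaches its ArrayBuffer,
// so views that still share the RSC chunk's buffer with the parser must be copied.
function enqueueEmbeddedByteChunk(
  controller: ReadableByteStreamController,
  view: Uint8Array,
  noFurtherParsingOfThisBuffer: boolean,
): void {
  if (noFurtherParsingOfThisBuffer) {
    // Zero-copy path: the chunk ends exactly at the end of the RSC chunk (or the
    // row spans into the next chunk), so detaching this buffer is harmless.
    controller.enqueue(view);
  } else {
    // Copy first so the detach doesn't break parsing of the rest of the buffer.
    controller.enqueue(view.slice());
  }
}
```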
We now also enqueue embedded byte stream chunks immediately as they are
parsed, without waiting for the full row to complete.
To simplify the logic in the client, we introduce a new `'b'` protocol
tag specifically for byte stream chunks. The server now emits `'b'`
instead of `'o'` for `Uint8Array` chunks from byte streams (detected via
`supportsBYOB`). This allows the client to recognize byte stream chunks
without needing to track stream IDs.
Tests now use the proper Jest environment with native Web Streams
instead of polyfills, exposing and validating the fix for this issue.
@josephsavona this was briefly discussed in an old thread, lmk your
thoughts on the approach. I have some fixes ready as well but wanted to
get this test case in first... there are some things I don't _love_ about
this approach, but at the end of the day it's just a tool for the test
suite rather than something for end users, so even if it does a 70% good
enough job that's fine.
### refresher on the problem
when we generate coverage reports with jest (istanbul), our coverage
ends up completely out of whack because the AST is missing a ton of
(let's call them "important") source locations after the compiler
pipeline has run.
At the moment, to get around this, we've been doing something a bit
unorthodox and also running our test suite with istanbul instrumentation
applied before the compiler -- which results in its own set of issues
(e.g. things being memoized differently, or the compiler completely
bailing out on the instrumented code, etc.).
Before getting fixes in, I wanted to set up a test case to start
chipping away at, as you had recommended.
### how it works
The validator basically:
1. Traverses the original AST and collects the source locations for some
"important" node types
   - (excludes useMemo/useCallback calls, as those are stripped out by
   the compiler)
2. Traverses the generated AST and looks for nodes with matching source
locations.
3. Generates errors for source locations with no matching node in the
generated AST (a rough sketch of this matching follows below).
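A rough sketch of the matching idea, with hypothetical helpers and an arbitrary choice of "important" node types (not the actual validator):
```ts
import {parse} from '@babel/parser';
import traverse from '@babel/traverse';

function collectImportantLocs(code: string): Set<string> {
  const locs = new Set<string>();
  const ast = parse(code, {sourceType: 'module', plugins: ['jsx']});
  traverse(ast, {
    enter(path) {
      // Which node types count as "important" is a judgment call; a few examples.
      if (
        path.isIfStatement() ||
        path.isReturnStatement() ||
        path.isCallExpression()
      ) {
        const loc = path.node.loc;
        if (loc != null) {
          locs.add(`${loc.start.line}:${loc.start.column}`);
        }
      }
    },
  });
  return locs;
}

// Every "important" location from the original source that has no node at a
// matching location in the compiled output becomes an error.
function findMissingLocs(original: string, compiled: string): Array<string> {
  const compiledLocs = collectImportantLocs(compiled);
  return [...collectImportantLocs(original)].filter(l => !compiledLocs.has(l));
}
```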
### caveats/drawbacks
There are some things that don't work super well with this approach. A
more natural test fit I think would be just having some explicit
assertions made against an AST in a test file, as you can just bake all
of the assumptions/nuance in there that are difficult to handle in a
generic manner. However, this is maybe "good enough" for now.
1. Have to be careful what you put into the test fixture. If you put in
some code that the compiler just removes (e.g. an unused variable
assignment), you're creating a failure case that's impossible to fix. I
added a skip for useMemo/useCallback.
2. "Important" locations must exactly match for validation to pass.
- Might get tricky making sure things are mapped correctly when a node
type is completely changed, e.g. when a block statement arrow function
body gets turned into an implicit return via the body just being an
expression/identifier.
- This can/could result in scenarios where more changes are needed to
shuttle the locations through, due to HIR not having a 1:1 mapping of
all the Babel nuances, even if some combination of other data might be
good enough despite not being 100% accurate. This might be the _right_
thing anyways so we don't end up with edge cases having incorrect source
locations.
Summary:
We should only run one version of the validation. I think it makes sense
that if the exp version is enabled, it takes precedence over the stable
one.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35099).
* __->__ #35099
* #35100
Summary:
I missed this test case failing when landing some other PRs. Now that it
has @loggerTestOnly, it's good to know these failures aren't land-blocking.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35100).
* #35099
* __->__ #35100
Summary:
When local state is created, it sometimes uses a `prop` or even other
local state for its initial value.
This value is only relevant on the first render, so we shouldn't
consider it part of our data flow.
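A hedged sketch of the pattern, with hypothetical names:
```tsx
import {useState} from 'react';

// `items` only seeds the initial value; after the first render it no longer
// flows into `selected`, so it shouldn't be treated as a data-flow source.
function Component({items}) {
  const [selected, setSelected] = useState(items[0]);
  return <div onClick={() => setSelected(null)}>{selected}</div>;
}
```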
Test Plan:
Added tests
Summary:
If an effect uses a cleanup function, and that cleanup function depends
on a value that is used to set the state we are validating, we shouldn't
throw an error, since this is a valid use case for an effect.
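A hedged sketch of the valid pattern, assuming hypothetical `subscribe`/`unsubscribe` helpers:
```tsx
import {useEffect, useState} from 'react';
import {subscribe, unsubscribe} from './store'; // hypothetical helpers

function Component({id}) {
  const [status, setStatus] = useState('idle');
  useEffect(() => {
    const token = subscribe(id, () => setStatus('ready'));
    // The cleanup also depends on `id`, the same value that feeds the setter;
    // this should not be flagged by the validation.
    return () => unsubscribe(id, token);
  }, [id]);
  return <div>{status}</div>;
}
```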
Test Plan:
added test
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35020).
* #35044
* __->__ #35020
Summary:
This makes the setState usage logic much more robust. We no longer rely
on identifierName.
Now we track when a setState is loaded into a new promoted identifier
variable and record this in a `setStateLoaded` map.
For other types of instructions, we consider the setState to be used,
and record its usage in the `setStateUsages` map.
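A rough sketch of the bookkeeping described above (types and shapes are assumed; not the actual compiler code):
```ts
type IdentifierId = number;
type InstructionId = number;

// Maps a promoted temporary back to the setState it was loaded from.
const setStateLoaded = new Map<IdentifierId, IdentifierId>();
// Records the instructions in which each setState is actually used.
const setStateUsages = new Map<IdentifierId, Array<InstructionId>>();

function recordLoad(promoted: IdentifierId, setState: IdentifierId): void {
  setStateLoaded.set(promoted, setState);
}

function recordUsage(id: IdentifierId, instr: InstructionId): void {
  // Follow the load back to the original setState, if there was one.
  const setState = setStateLoaded.get(id) ?? id;
  const usages = setStateUsages.get(setState) ?? [];
  usages.push(instr);
  setStateUsages.set(setState, usages);
}
```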
Test Plan:
We expect no changes in behavior for the current tests
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34973).
* #35044
* #35020
* __->__ #34973
* #34972
Summary:
Revamped the derivationCache graph.
This fixes a bunch of bugs where we sometimes failed to track which
props/state values were derived from.
It is also more intuitive and allows us to easily implement a Data Flow
Tree.
We can print this tree, which gives insight into how the data is derived
and should facilitate error resolution in complicated components.
Test Plan:
Added a test case where we were failing to track derivations. Also
updated the test cases with the new error containing the data flow tree
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34995).
* #35044
* #35020
* #34973
* #34972
* __->__ #34995
* #34967
Summary:
With this, we now compare a snapshot of the derivationCache with the new
changes every time we finish recording the derivations happening in the
HIR.
We have to do this after recording everything, since we still perform
some mutations on the cache when recording mutations.
Test Plan:
Test the following in playground:
```
// @validateNoDerivedComputationsInEffects_exp
function Component({ value }) {
  const [checked, setChecked] = useState('');
  useEffect(() => {
    setChecked(value === '' ? [] : value.split(','));
  }, [value]);
  return (
    <div>{checked}</div>
  );
}
```
This no longer causes an infinite loop.
Added a test case in the next PR in the stack
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34967).
* #35044
* #35020
* #34973
* #34972
* #34995
* __->__ #34967
This PR updates the behavior of Activity so that when it is hidden, it
hides the contents of any portals contained within it.
Previously we had intentionally chosen not to implement this behavior,
because it was thought that this concern should be left to the userspace
code that manages the portal, e.g. by adding or removing the portal
container from the DOM. Depending on the use case for the portal, this
is often desirable anyway because the portal container itself is not
controlled by React.
However, React does own the _contents_ of the portal, and we can hide
those elements regardless of what the user chooses to do with the
container. This makes the hiding/unhiding behavior of portals with
Activity automatic in the majority of cases, and also benefits from
aligning the DOM mutations with the rest of React's commit phase
lifecycle.
The reason we have to special case this at all is because usually we
only hide the direct DOM children of the Activity boundary. There's no
reason to go deeper than that, because hiding a parent DOM element
effectively hides everything inside of it. Portals are the exception,
because they don't exist in the normal DOM hierarchy; we can't assume
that just because a portal has a parent in the React tree that it will
also have that parent in the actual DOM.
So, whenever an Activity boundary is hidden, we must search for and hide
_any_ portal that is contained within it, and recursively hide its
direct children, too.
To optimize this search, we use a new subtree flag, PortalStatic, that
is set only on fiber paths that contain a HostPortal. This lets us skip
over any subtree that does not contain a portal.
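A hedged usage sketch (component and container names are hypothetical; the Activity export name depends on the release channel):
```tsx
import {unstable_Activity as Activity} from 'react'; // may be exported as `Activity` on newer channels
import {createPortal} from 'react-dom';

function Panel({hidden, container}: {hidden: boolean; container: Element}) {
  return (
    <Activity mode={hidden ? 'hidden' : 'visible'}>
      <div>Inline content</div>
      {/* With this change, the portal's contents are hidden along with the
          Activity, even though the container lives elsewhere in the DOM. */}
      {createPortal(<div>Portal content</div>, container)}
    </Activity>
  );
}
```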
When I moved the outline to above all other rects, I thought it was
clever to unify with the root so that the outline was also used for the
root selection. But the root outline is not drawn like the other rects.
It's outside the padding and doesn't have the 1px adjustment which leads
the overlay to be slightly inside the other rect instead of above it.
This goes back to just having the selected root be drawn by the root
element.
Before:
<img width="652" height="253" alt="Screenshot 2025-11-07 at 11 39 28 AM"
src="https://github.com/user-attachments/assets/334237d1-f190-4995-94cc-9690ec0f7ce1"
/>
After:
<img width="674" height="220" alt="Screenshot 2025-11-07 at 11 44 01 AM"
src="https://github.com/user-attachments/assets/afaa86d8-942a-44d8-a1a5-67c7fb642c0d"
/>
If an error is thrown inside a hidden Activity, it should not escape
into the visible part of the UI. Conceptually, a hidden Activity
boundary is not part of the current UI; it's the same as an unmounted
tree, except for the fact that the state will be restored if it's later
revealed.
Fixes:
- https://github.com/facebook/react/issues/35073
## Summary
This PR upgrades the dependency on update-notifier, used in
react-devtools, to 5.x.
This is the latest non-ESM version, so upgrading to it should be
unproblematic (while updating to 6.x and beyond will have to wait).
Upgrading means we avoid installing a lot of outdated dependencies (as
can be seen from the diff in yarn.lock), and resolves part of
https://github.com/facebook/react/issues/28058
Changelog:
https://github.com/yeoman/update-notifier/releases
The most relevant breaking change seems to be that the minimum supported
node version is increased from v6 to v10, but I couldn't find what is
currently React's official node version support.
## How did you test this change?
I ran the test-suite locally (`yarn test` in root folder), but I'm not
sure if that one actually covers devtools?
I also built and tested this version of devtools with some internal
company projects (both react and react-native based) – following
guidelines from
https://github.com/facebook/react/issues/28058#issuecomment-1943619292.
I need to regain a field because the SuspenseBoundary type is already at
16 fields in prod, beyond which V8 deopts it.
There are two fields that are only used in prerender to track postpones.
These are ripe to be split into an optional object so that they only
take up one field when they're not used.
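An illustrative sketch of the general pattern (field names are hypothetical, not the actual SuspenseBoundary shape):
```ts
// Two prerender-only fields are folded into one optional object, so they cost a
// single slot on the hot type when unused and keep the field count under the
// limit mentioned above.
type PrerenderOnlyState = {
  trackedPostpones: unknown;
  fallbackReplayNode: unknown;
};

type BoundaryLike = {
  // ...the other, frequently used fields...
  prerenderState: null | PrerenderOnlyState; // allocated only during prerender
};
```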
We already append `randomKey` to each handle name to prevent external
libraries from accessing and relying on these internals. But more
libraries recently have been getting around this by simply iterating
over the element properties and using a `startsWith` check.
This flag allows us to experiment with moving these handles to an
internal map.
This PR starts with the two most common internals, the props object and
the fiber. We can consider moving additional properties such as the
container root and others depending on perf results.
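A rough sketch of the idea behind the flag (not React's actual implementation):
```ts
// Instead of expando properties like `node.__reactProps$<randomKey>`, which can
// be found by iterating properties with a startsWith check, keep the
// association in module-level WeakMaps that external code can't enumerate.
const internalPropsMap: WeakMap<Node, object> = new WeakMap();
const internalFiberMap: WeakMap<Node, object> = new WeakMap();

export function setInternalProps(node: Node, props: object): void {
  internalPropsMap.set(node, props);
}

export function getInternalProps(node: Node): object | undefined {
  return internalPropsMap.get(node);
}
```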
Also, don't not skip hidden trees.
Memoized state is null when an Offscreen boundary (Suspense or Activity)
is visible.
This logic was inverted in a couple of View Transition checks, which
caused pairs to be discovered or not discovered incorrectly for
insertion and deletion of Suspense or Activity boundaries.
This is an alternative to #35059.
If the name needs escaping, then instead of escaping it, we just use a
base64 name. This wouldn't allow you to match on an escaped name in your
own CSS like you should be able to if browsers worked properly. But at
least it would provide a matching name in current browsers, which is
probably sufficient if you're using auto-generated names.
This also covers some cases where `CSS.escape()` isn't sufficient anyway
like when the name ends in a dot.
Follow up to #35022.
It's now replaced by the `defer` option.
Sounds like nobody is actually using this option, including Meta, so we
can just delete it.
We've long had the CPU suspense feature behind a flag under the terrible
API `unstable_expectedLoadTime={arbitraryNumber}`. We've known for a
long time we want it to just be `defer={true}` (or just `<Suspense
defer>` in the shorthand syntax). So this adds the new name and warns
for the old name.
For only the new name, I also implemented SSR semantics in Fizz. It has
two effects here.
1) It renders the fallback before the content (similar to prerender)
allowing siblings to complete quicker.
2) It always outlines the result. When streaming this should really
happen naturally, but if you defer prerendered content it also implies
that it's expensive and should be outlined. It gives you an opt-in to
outlining, similar to suspensey images and CSS, but lets you control it
manually.
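A minimal usage sketch of the new name (Spinner, ImportantContent, and ExpensiveWidget are hypothetical components):
```tsx
import {Suspense} from 'react';

function Page() {
  return (
    <>
      <Suspense fallback={<Spinner />}>
        <ImportantContent />
      </Suspense>
      {/* Deferred: the fallback is rendered before the content, and during SSR
          the result is always outlined. */}
      <Suspense defer fallback={<Spinner />}>
        <ExpensiveWidget />
      </Suspense>
    </>
  );
}
```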
I don't think we're ready to land this yet since we're using it to run
other experiments and our tests. I'm opening this PR to indicate intent
to disable and to ensure tests in other combinations still work. Such as
enableHalt without enablePostpone. I think we'll also need to rewrite
some tests that depend on enablePostpone to preserve some coverage.
The conclusion after this experiment is that try/catch around these is
too likely to block these signals and treat them as errors. Throwing
works for Hooks and `use()` because the lint rule can ensure that
they're not wrapped in try/catch. Throwing in arbitrary functions is not
quite ecosystem compatible. It's also why there's `use()` and not just
throwing a Promise. This might also affect the Catch proposal.
The "prerender" for SSR that's supporting "Partial Prerendering" is
still there. This just disables the `React.postpone()` API for creating
the holes.
Normally if you suspend in a SuspenseList row above a Suspense boundary
in that row, it'll suspend the parent. Which can itself delay the commit
or resuspend a parent boundary. That's because SuspenseList mostly just
coordinates the state of the inner boundaries and isn't a boundary
itself.
However, for tail "hidden" and "collapsed" this is not quite the case
because the rows themselves can avoid being rendered.
In the case of "collapsed" we require at least one Suspense boundary
above to have successfully rendered before committing the list because
the idea of this mode is that you should at least always show some
indicator that things are still loading. Since we'd never try the next
one after that at all, this just works. Except there was an unrelated
bug that meant that "suspend with delay" on a Retry didn't suspend the
commit. This caused a scenario where it'd allow a commit to proceed when
it shouldn't. So I fixed that too. The counterintuitive thing here is
that we won't actually show a previously completed row while the next
row is still loading.
For tail "hidden" it's a little different because we don't actually
require any loading indicator at all to be shown while it's loading. If
we attempt a row and it suspends, we can just hide it (and the rest) and
move to commit. Therefore this implements a path where if all the rest
of the tail are new mounts (we wouldn't be required to unmount any
existing boundaries) then we can treat the SuspenseList boundary itself
as "catching" the suspense. This is more coherent semantics since any
future row that we didn't attempt also wouldn't resuspend the parent.
This allows simple cases like `<SuspenseList>{list}</SuspenseList>` to
stream in each row without any indicator and no need for Suspense
boundaries.
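A sketch of the simple case described above (Row is hypothetical; the SuspenseList export name depends on the release channel):
```tsx
import {unstable_SuspenseList as SuspenseList} from 'react';

function Feed({items}: {items: Array<{id: string}>}) {
  return (
    // With tail="hidden", a row that suspends is simply hidden (along with the
    // rest of the tail) and streams in later, with no per-row Suspense
    // boundaries or loading indicators needed.
    <SuspenseList revealOrder="forwards" tail="hidden">
      {items.map(item => (
        <Row key={item.id} id={item.id} />
      ))}
    </SuspenseList>
  );
}
```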
We were not recording uEE calls in component/hook syntax. Easy fix.
Added tests for component syntax matching the existing function
component syntax tests, plus one for hooks.
For Edge Flight servers, that use Web Streams, we're defining the
`debugChannel` option as:
```
debugChannel?: {readable?: ReadableStream, writable?: WritableStream, ...}
```
Whereas for Node.js Flight servers, that use Node.js Streams, we're
defining it as:
```
debugChannel?: Readable | Writable | Duplex | WebSocket
```
For the Edge Flight clients, there is currently only one direction of
the debug channel supported, so we define the option as:
```
debugChannel?: {readable?: ReadableStream, ...}
```
Consequently, for the Node.js Flight clients, we define the option as:
```
debugChannel?: Readable
```
The presence of a readable debug channel is passed to the Flight client
internally via the `hasReadable` flag on the internal `debugChannel`
option. For the Node.js clients, that flag was accidentally derived from
the public option `debugChannel.readable`, which is conceptually
incorrect, because `debugChannel` is a `Readable` stream, not an options
object with a `readable` property. However, a `Readable` also has a
`readable` property, which is a boolean that indicates whether the
stream is in a readable state. This meant that the `hasReadable` flag
was incidentally still set correctly. Regardless, this was confusing and
unintentional, so we're now fixing it to always set `hasReadable` to
`true` when a `debugChannel` is provided to the Node.js clients. We'll
revisit this in case we ever add support for writable debug channels in
Node.js (and Edge) clients.
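A small illustration of the confusion (not the actual client code): a Node.js `Readable` happens to have a boolean `readable` property, so the old check worked by accident.
```ts
import {Readable} from 'stream';

const debugChannel = Readable.from(['debug data']);

// Old, accidental check: reads the stream's state flag, which is `true` while
// the stream is readable, rather than asking "was a readable side provided?".
const hasReadableOld = Boolean(debugChannel.readable);

// Intended check: any provided debugChannel implies a readable side in Node.js.
const hasReadableNew = debugChannel !== undefined;
```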
This PR adds an `unstable_reactFragments?: Set<FragmentInstance>`
property to DOM nodes that belong to a Fragment with a ref (top level
host components). This allows you to access a FragmentInstance from a
DOM node.
This is flagged behind `enableFragmentRefsInstanceHandles`.
The primary use case to unblock is reusing IntersectionObserver
instances. A fairly common practice is to cache and reuse
IntersectionObservers that share the same config, with a map of
node->callbacks to run for each entry in the IO callback. Currently this
is not possible with Fragment Ref `observeUsing` because the key in the
cache would have to be the `FragmentInstance` and you can't find it
without a handle from the node. This works now by accessing
`entry.target.fragments`.
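A hypothetical sketch of the reuse pattern this unblocks: one shared IntersectionObserver whose callback recovers the fragment instance(s) from the observed node via the new handle (typed loosely here):
```ts
const sharedObserver = new IntersectionObserver(entries => {
  for (const entry of entries) {
    // Property added by this PR on the Fragment's top-level host nodes.
    const fragments = (entry.target as any).unstable_reactFragments as
      | Set<{getClientRects(): Array<DOMRect>}>
      | undefined;
    if (fragments != null) {
      for (const fragment of fragments) {
        // Run whatever per-fragment work the cached observer was registered for.
        console.log(fragment.getClientRects());
      }
    }
  }
});
```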
This also opens up possibilities to use `FragmentInstance` operations in
other places, such as events. We can do
`event.target.unstable_reactFragments`, then access
`fragmentInstance.getClientRects` for example. In a future PR, we can
assign an event's `currentTarget` as the Fragment Ref for a more direct
handle when the event has been dispatched by the Fragment itself.
The first commit here implemented a handle only on observed elements.
This is awkward because there isn't a good way to document or expose
this temporary property. `element.fragments` is closer to what we would
expect from a DOM API if a standard was implemented here. And by
assigning it to all top-level nodes of a Fragment, it can be used beyond
the cached IntersectionObserver callback.
One tradeoff here is adding extra work during the creation of
FragmentInstances as well as keeping track of adding/removing nodes.
Previously we only tracked the Fiber on creation, but here we add a
traversal which could apply to a large set of top-level host children.
The `element.unstable_reactFragments` Set can also be randomly ordered.
In #35019, we excluded debug I/O info from being considered for
enhancing the owner stack if it resolved after the defined `endTime`
option that can be passed to the Flight client. However, we should
include any I/O that was awaited before that end time, even if it
resolved later.