Similar to ValidateHookUsage, we implement this check in the compiler
for safety but (for now) continue to rely on the existing rule for
actually reporting errors to users.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35192).
* #35201
* #35202
* __->__ #35192
The existing exhaustive-deps rule allows omitting non-reactive
dependencies, even if they're not memoized. Conceptually, if a value is
non-reactive then it cannot semantically change. Even if the value is a
new object, that object represents the exact same value and doesn't
necessitate redoing downstream computation. Thus it's fine to exclude
non-reactive dependencies, whether they're a stable type or not.
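A minimal sketch of the kind of code this allows (the component and the module-scoped `formatLabel` helper are hypothetical):

```js
import {useMemo} from 'react';

// formatLabel is module-scoped, so it is non-reactive: it may be a new object
// identity across modules, but it cannot semantically change between renders.
const formatLabel = item => `Item: ${item.name}`;

function List({items}) {
  const labels = useMemo(
    () => items.map(item => formatLabel(item)),
    [items] // formatLabel omitted: non-reactive, so this is fine
  );
  return <ul>{labels.map(label => <li key={label}>{label}</li>)}</ul>;
}
```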
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35190).
* #35201
* #35202
* #35192
* __->__ #35190
Since adding this validation we've already changed our inference to use
knowledge from manual memoization to inform when values are frozen and
which values are non-nullable. To align with that, if the user chooses
to use different optionality between the deps and the memo
block/callback, that's fine. The key is that e.g. `x?.y` will invalidate
whenever `x.y` would, so from a memoization correctness perspective it's
fine. It's not our job to be a type checker: if a value is potentially
nullable, it should likely use a nullable property access in both places,
but TypeScript/Flow can check that.
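A minimal sketch of the mismatch this now allows (the component and the `expensive` helper are hypothetical):

```js
import {useMemo} from 'react';

// The dep uses optional chaining while the memo body does not. `x?.y`
// invalidates whenever `x.y` would, so the memoization is still correct;
// whether `x` can actually be null is left to TypeScript/Flow.
function Component({x, expensive}) {
  const value = useMemo(() => expensive(x.y), [x?.y, expensive]);
  return <div>{value}</div>;
}
```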
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35186).
* #35201
* #35202
* #35192
* #35190
* __->__ #35186
When checking ValidateExhaustiveDeps internally, this seems to be the
most common case that it flags. The current exhaustive-deps rule allows
extraneous deps if they are a set of stable types. So here we reuse our
existing isStableType() util in the compiler to allow this case.
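For example, something like the following (hypothetical) component is what this permits:

```js
import {useCallback, useState} from 'react';

// setCount is a stable setState function, so listing it as an extra dep is
// harmless; isStableType() lets the compiler accept it just like the lint rule.
function Counter() {
  const [count, setCount] = useState(0);
  const increment = useCallback(() => setCount(c => c + 1), [setCount]);
  return <button onClick={increment}>{count}</button>;
}
```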
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35185).
* #35201
* #35202
* #35192
* #35190
* #35186
* __->__ #35185
With `ValidateExhaustiveMemoDependencies` we can now check exhaustive
dependencies for useMemo and useCallback within the compiler, without
relying on the separate exhaustive-deps rule. Until now we've bailed out
of any component/hook that suppresses this rule, since the suppression
_might_ affect a memoized value. Compiling code with incorrect memo
deps can change behavior, so this wasn't safe. The downside was that a
suppression within a useEffect could prevent memoization, even though
non-exhaustive deps for effects do not cause problems for memoization
specifically.
So here we change the compiler to ignore ESLint suppressions when both
the compiler's hooks validation and memo deps validation are enabled.
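For example, a (hypothetical) component like this previously bailed out of compilation entirely, even though the suppression only covers an effect:

```js
import {useEffect} from 'react';

// The suppression only affects the effect's deps, which has no bearing on
// memoization, so the compiler no longer needs to skip this component.
function Chat({roomId, onVisit}) {
  useEffect(() => {
    onVisit(roomId);
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);
  return <div>Room {roomId}</div>;
}
```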
Now we just have to test out the new validation and refine it before we can
enable this by default.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35184).
* #35201
* #35202
* #35192
* #35190
* #35186
* #35185
* __->__ #35184
Records more information in DropManualMemoization so that we know the
full span of the manual dependencies array (if present). This allows
ValidateExhaustiveDeps to include a suggestion with the correct deps.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34471).
* #34472
* __->__ #34471
The compiler currently drops manual memoization and rewrites it using
its own inference. If the existing manual memo dependencies have missing
or extra dependencies, compilation can change behavior by running the
computation more often (if deps were missing) or less often (if there
were extra deps). We currently address this by relying on the developer
to use the ESLint plugin and have `eslint-disable-next-line
react-hooks/exhaustive-deps` suppressions in their code. If a
suppression exists, we skip compilation.
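As a rough (hypothetical) illustration of how a missing dep changes behavior under compilation:

```js
import {useMemo} from 'react';

// `query` is missing from the deps, so the manual memo recomputes less often
// than the compiler's inferred memoization would; compiling this code could
// therefore change observable behavior.
function Results({items, query}) {
  const filtered = useMemo(
    () => items.filter(item => item.includes(query)),
    [items] // missing: query
  );
  return <ul>{filtered.map(item => <li key={item}>{item}</li>)}</ul>;
}
```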
But not everyone is using the linter! Relying on the linter is also
imprecise since it forces us to bail out on exhaustive-deps checks that
only affect (ahem) effects — and while it isn't good to have incorrect
deps on effects, it isn't a problem for compilation.
So this PR is a rough sketch of validating manual memoization
dependencies in the compiler. Long-term we could use this to also check
effect deps and replace the ExhaustiveDeps lint rule, but for now I'm
focused specifically on manual memoization use-cases. If this works, we
can stop bailing out on ESLint suppressions, since the compiler will
implement all the appropriate checks (we already check rules of hooks).
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34394).
* #34472
* #34471
* __->__ #34394
This deprecates the `noEmit: boolean` flag and adds `outputMode:
'client' | 'client-no-memo' | 'ssr' | 'lint'` as the replacement.
OutputMode defaults to null and takes precedence if specified, otherwise
we use 'client' mode for noEmit=false and 'lint' mode for noEmit=true.
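As a sketch of the resulting config shape (the option object below is illustrative; only the field names come from this description):

```js
// Illustrative only: outputMode supersedes the deprecated noEmit flag.
const compilerOptions = {
  // noEmit: true,    // deprecated; roughly maps to outputMode: 'lint'
  outputMode: 'lint', // 'client' | 'client-no-memo' | 'ssr' | 'lint'
};
```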
Key points:
* Retrying failed compilation switches from 'client' mode to
'client-no-memo'
* Validations are gated behind
Environment.proto.shouldEnableValidations, which is enabled for all
modes except 'client-no-memo'. Similar for dropping manual memoization.
* OptimizeSSR is now gated by outputMode==='ssr', not a feature flag
* Creation of reactive scopes, and related codegen logic, is now gated
by outputMode==='client'
Just a quick PoC:
* Inline useState when the initializer is known to not be a function.
The heuristic could be improved but will handle a large number of cases
already.
* Prune effects
* Prune useRef if the ref is unused, by pruning 'ref' props on primitive
components. Then DCE does the rest of the work - with a small change to
allow `useRef()` calls to be dropped since function calls aren't
normally eligible for dropping.
* Prune event handlers, by pruning props whose names start with "on" from
primitive components. Then DCE removes the functions themselves.
Per the fixture, this gets pretty far.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35102).
* #35112
* __->__ #35102
Summary:
I missed this conditional messing things up for undefined useState()
calls. We should be tracking them.
I also missed that a test which expects an error was not throwing.
Test Plan:
Update broken test
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35174).
* __->__ #35174
* #35173
Summary:
The operands of a function expression are the elements passed as
context. This means that it doesn't make sense to record mutations for
them.
The relevant mutations will happen in the function body, so we need to
prevent FunctionExpression-type instructions from running the logic for
effect mutations.
This was also causing some values to depend on themselves, in some cases
triggering an infinite loop. Also added an invariant to prevent this
issue.
Test Plan:
Added fixture test
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35173).
* #35174
* __->__ #35173
When dealing with optimistic state, a common problem is not knowing the
id of the thing we're waiting on. Items in lists need keys (and single
items should often have keys too to reset their state). As a result you
have to generate fake keys. It's a pain to manage those and when the
real item comes in, you often end up rendering that with a different
`key` which resets the state of the component tree. That in turn works
against the grain of React and a lot of negatives fall out of it.
This adds a special `optimisticKey` symbol that can be used in place of
a `string` key.
```js
import {optimisticKey} from 'react';
...
const [optimisticItems, setOptimisticItems] = useOptimistic([]);
const children = savedItems.concat(
  optimisticItems.map(item =>
    <Item key={optimisticKey} item={item} />
  )
);
return <div>{children}</div>;
```
The semantics of `optimisticKey` are that the newly saved item is
assumed to be rendered in the same slot as the previous optimistic item.
State is transferred into whatever real key ends up in the same slot.
This might lead to some incorrect transferring of state in some cases
where things don't end up lining up - but it's worth it for simplicity
in many cases since dealing with true matching of optimistic state is
often very complex for something that only lasts a blink of an eye.
If a new item matches a `key` elsewhere in the set, then that's favored
over reconciling against the old slot.
One quirk with the current algorithm is that if `savedItems` has items
removed, then the slots won't line up by index anymore and will be
skewed. We might be able to add something where the optimistic set is
always reconciled against the end. However, it's probably better to just
assume that the set will line up perfectly and otherwise it's just best
effort that can lead to weird artifacts.
An `optimisticKey` will match itself for updates to the same slot, but
it will not match any existing slot that is not an `optimisticKey`. So
it's not an `any`, which I originally called it, because it doesn't
match existing real keys against new optimistic keys. Only one
direction.
I've been trying out LLM agents for compiler development, and one thing
I found is that the agent naturally wants to run `yarn snap <pattern>`
to test a specific fixture, and I want to be able to tell it (directly
or in rules/skills) to do this in order to get the debug output from all
the compiler passes. Agents can figure out our current testfilter.txt
file setup, but that's just tedious. So here we add support for `yarn
snap -p <pattern>`. If you pass in a pattern with an extension, we
target that extension specifically. If you pass in a .expect.md file, we
look at that specific fixture. And if the pattern doesn't have
extensions, we search for `<pattern>{.js,.jsx,.ts,.tsx}`. When patterns
are enabled we automatically log as in debug mode (if there is a single
match), and disable watch mode.
Open to feedback!
Conditionally calling setState in an effect is sometimes necessary, but
should generally follow the pattern of using a "previous value" ref to
manually compare and ensure that the setState is idempotent. See fixture
for an example.
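A minimal sketch of that pattern (hypothetical component, roughly what the fixture demonstrates):

```js
import {useEffect, useRef, useState} from 'react';

// Comparing against the last-seen prop via a ref keeps the conditional
// setState idempotent: re-running the effect with the same value is a no-op.
function Component({value}) {
  const [derived, setDerived] = useState(value);
  const prevValue = useRef(value);
  useEffect(() => {
    if (prevValue.current !== value) {
      prevValue.current = value;
      setDerived(value);
    }
  }, [value]);
  return <div>{derived}</div>;
}
```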
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35147).
* #35148
* __->__ #35147
Destructuring statements that start off as declarations can end up
becoming reassignments if the variable is a scope declaration, so we
have existing logic to handle cases where some parts of a destructure
need to be converted into new locals, with a reassignment to the hoisted
scope variable afterwards. However, there is an edge case where all of
the values are reassigned, in which case we don't need to rewrite and
can just set the instruction kind to reassign.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35144).
* #35148
* #35147
* #35146
* __->__ #35144
In DEV, we need to prevent the response from being GC'd while there are
still pending chunks for ReadableStreams or pending results for
AsyncIterables.
Co-authored-by: Sebastian "Sebbie" Silbermann <silbermann.sebastian@gmail.com>
Fix for the repro from the previous PR. A `Capture x -> y` effect should
downgrade to `ImmutableCapture` when the source value is maybe-frozen.
MaybeFrozen represents the union of a frozen value with a non-frozen
value.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35140).
* __->__ #35140
* #35139
## Summary
Fixes #35040. The React compiler incorrectly flags ref access within
event handlers as ref access at render time. For example, this code
would fail to compile with error "Cannot access refs during render":
```tsx
const onSubmit = async (data) => {
  const file = ref.current?.toFile(); // Incorrectly flagged as error
};
<form onSubmit={handleSubmit(onSubmit)}>
```
This is a false positive because any built-in DOM event handler is
guaranteed not to run at render time. This PR only supports built-in
event handlers because there are no guarantees that user-made event
handlers will not run at render time.
## How did you test this change?
I created 4 test fixtures which validate this change:
* allow-ref-access-in-event-handler-wrapper.tsx - Sync handler test
input
* allow-ref-access-in-event-handler-wrapper.expect.md - Sync handler
expected output
* allow-ref-access-in-async-event-handler-wrapper.tsx - Async handler
test input
* allow-ref-access-in-async-event-handler-wrapper.expect.md - Async
handler expected output
All linters and test suites also pass.
Summary:
This only matters when enableTreatSetIdentifiersAsStateSetters=true.
This pattern is still bad, but right now the validation can only
recommend moving the computation to "calculate in render".
A global setState should not be moved to render, not even conditionally,
and you can't remove state without crossing component boundaries, which
makes this a different kind of fix.
So while we are only suggesting "calculate in render" as a fix, we should
prevent the lint from throwing in this case IMO.
Test Plan:
Added a fixture
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35135).
* __->__ #35135
* #35134
Summary:
The validation only allows the setState declaration as a usage outside of
the effect.
Another edge case: if you add the setState being validated to the
dependency array, you also opt out of the validation, since that counts
as a usage outside of the effect.
Added a bit of logic to consider the effect's deps when creating the
cache of setState usages within the effect.
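A minimal (hypothetical) example of the edge case:

```js
import {useEffect, useState} from 'react';

// Listing `setChecked` in the deps array previously counted as a usage outside
// the effect and opted the component out of this validation; it shouldn't.
function Component({value}) {
  const [checked, setChecked] = useState('');
  useEffect(() => {
    setChecked(value);
  }, [value, setChecked]);
  return <div>{checked}</div>;
}
```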
Test Plan:
Added a fixture
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35134).
* #35135
* __->__ #35134
This PR fixes a critical bug where `ReadableStream({type: 'bytes'})`
instances passed through React Server Components (RSC) would stall after
reading only the first chunk or the first few chunks in the client. This
issue was masked by using `web-streams-polyfill` in tests, but manifests
with native Web Streams implementations.
The root cause is that when a chunk is enqueued to a
`ReadableByteStreamController`, the spec requires the underlying
ArrayBuffer to be synchronously transferred/detached. In the React
Flight Client's chunk parsing, embedded byte stream chunks are created
as views into the incoming RSC stream chunk buffer using `new
Uint8Array(chunk.buffer, offset, length)`. When embedded byte stream
chunks are enqueued, they can detach the shared buffer, leaving the RSC
stream parsing in a broken state.
The fix is to copy embedded byte stream chunks before enqueueing them,
preventing buffer detachment from affecting subsequent parsing. To not
affect performance too much, we use a zero-copy optimization: when a
chunk ends exactly at the end of the RSC stream chunk, or when the row
spans into the next RSC chunk, no further parsing will access that
buffer, so we can safely enqueue the view directly without copying.
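Roughly, the idea is something like the following sketch (names are illustrative, not the actual Flight Client internals):

```js
// Enqueueing a view into the shared RSC buffer can detach that buffer, so copy
// the chunk unless no further parsing will touch the buffer afterwards.
function enqueueEmbeddedChunk(controller, buffer, offset, length, nothingElseReadsBuffer) {
  const view = new Uint8Array(buffer, offset, length);
  // Zero-copy path: safe to hand over the view (and let the buffer detach).
  // Otherwise slice() copies the bytes into a fresh buffer first.
  controller.enqueue(nothingElseReadsBuffer ? view : view.slice());
}
```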
We now also enqueue embedded byte stream chunks immediately as they are
parsed, without waiting for the full row to complete.
To simplify the logic in the client, we introduce a new `'b'` protocol
tag specifically for byte stream chunks. The server now emits `'b'`
instead of `'o'` for `Uint8Array` chunks from byte streams (detected via
`supportsBYOB`). This allows the client to recognize byte stream chunks
without needing to track stream IDs.
Tests now use the proper Jest environment with native Web Streams
instead of polyfills, exposing and validating the fix for this issue.
@josephsavona this was briefly discussed in an old thread, lmk your
thoughts on the approach. I have some fixes ready as well but wanted to
get this test case in first... there's some things I don't _love_ about
this approach, but end of the day it's just a tool for the test suite
rather than something for end user folks so even if it does a 70% good
enough job that's fine.
### refresher on the problem
when we generate coverage reports with jest (istanbul), our coverage
ends up completely out of whack due to the AST missing a ton of (let's
call them "important") source locations after the compiler pipeline has
run.
At the moment to get around this, we've been doing something a bit
unorthodox and also running our test suite with istanbul running before
the compiler -- which results in its own set of issues (e.g., things
being memoized differently, or the compiler completely bailing out on
the instrumented code, etc).
before getting in fixes, I wanted to set up a test case to start
chipping away on as you had recommended.
### how it works
The validator basically:
1. Traverses the original AST and collects the source locations for some
"important" node types (excluding useMemo/useCallback calls, as those are
stripped out by the compiler).
2. Traverses the generated AST and looks for nodes with matching source
locations.
3. Generates errors for any source location that has no matching node in
the generated AST (see the sketch after this list).
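A rough sketch of that flow, assuming `@babel/traverse`; the `importantTypes` set and the `keyFor` helper are made up for illustration:

```js
import traverse from '@babel/traverse';

function findMissingLocations(originalAst, generatedAst, importantTypes) {
  const keyFor = loc => `${loc.start.line}:${loc.start.column}`;

  // 1. Collect source locations of "important" nodes in the original AST.
  const expected = new Set();
  traverse(originalAst, {
    enter(path) {
      if (importantTypes.has(path.node.type) && path.node.loc) {
        expected.add(keyFor(path.node.loc));
      }
    },
  });

  // 2. Collect all source locations present in the generated AST.
  const found = new Set();
  traverse(generatedAst, {
    enter(path) {
      if (path.node.loc) {
        found.add(keyFor(path.node.loc));
      }
    },
  });

  // 3. Report the locations that no generated node carries.
  return [...expected].filter(key => !found.has(key));
}
```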
### caveats/drawbacks
There are some things that don't work super well with this approach. A
more natural test fit I think would be just having some explicit
assertions made against an AST in a test file, as you can just bake all
of the assumptions/nuance in there that are difficult to handle in a
generic manner. However, this is maybe "good enough" for now.
1. Have to be careful what you put into the test fixture. If you put in
some code that the compiler just removes (e.g., a variable assignment
that is unused), you're creating a failure case that's impossible to
fix. I added a skip for useMemo/useCallback.
2. "Important" locations must exactly match for validation to pass.
- Might get tricky making sure things are mapped correctly when a node
type is completely changed, e.g., when a block statement arrow function
body gets turned into an implicit return via the body just being an
expression/identifier.
- This could result in scenarios where more changes are needed to
shuttle the locations through, due to HIR not having a 1:1 mapping of all
the Babel nuances, even if some combination of other data might be good
enough without being 100% accurate. This might be the _right_ thing
anyways so we don't end up with edge cases having incorrect source
locations.
Summary:
We should only run one version of the validation. I think it makes sense
that if the exp version is enabled it takes precedence over the stable
one
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35099).
* __->__ #35099
* #35100
Summary:
I missed this test case failing; it now has @loggerTestOnly after
landing some other PRs. Good to know they're not land blocking.
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35100).
* #35099
* __->__ #35100
Summary:
When a local state is created, sometimes it uses a `prop` or even other
local state for its initial value.
This value is only relevant on first render, so we shouldn't consider it
part of our data flow.
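A minimal (hypothetical) example:

```js
import {useState} from 'react';

// `value` only seeds the initial state on first render, so it should not be
// treated as part of the data flow into `local`.
function Component({value}) {
  const [local, setLocal] = useState(value);
  return <input value={local} onChange={e => setLocal(e.target.value)} />;
}
```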
Test Plan:
Added tests
Summary:
If an effect has a cleanup function and that cleanup function depends on
a value that is used to set the state we are validating, we shouldn't
throw an error, since that is a valid use case for an effect.
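A minimal (hypothetical) example, with `unsubscribe` as a stand-in for whatever external resource the cleanup releases:

```js
import {useEffect, useState} from 'react';

// Stand-in for an external resource the cleanup tears down.
const unsubscribe = value => console.log('unsubscribe', value);

function Component({value}) {
  const [checked, setChecked] = useState(false);
  useEffect(() => {
    setChecked(value !== '');
    return () => {
      // The cleanup depends on `value`, which is also used to set the state
      // being validated, so this should not be reported as an error.
      unsubscribe(value);
    };
  }, [value]);
  return <div>{String(checked)}</div>;
}
```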
Test Plan:
added test
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/35020).
* #35044
* __->__ #35020
Summary:
This makes the setState usage logic much more robust. We no longer rely
on identifierName.
Now we track when a setState is loaded into a new promoted identifier
variable and record this in a `setStateLoaded` map.
For other types of instructions we consider the setState to be used, and
record its usage in the `setStateUsages` map.
Test Plan:
We expect no changes in behavior for the current tests
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34973).
* #35044
* #35020
* __->__ #34973
* #34972
Summary:
Revamped the derivationCache graph.
This fixes a bunch of bugs where we sometimes failed to track which
props/state values were derived from.
Also, it is more intuitive and allows us to easily implement a Data Flow
Tree. We can print this tree, which gives insight into how the data is
derived and should facilitate error resolution in complicated components.
Test Plan:
Added a test case where we were failing to track derivations. Also
updated the test cases with the new error containing the data flow tree
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34995).
* #35044
* #35020
* #34973
* #34972
* __->__ #34995
* #34967
Summary:
With this we are now comparing a snapshot of the derivationCache with
the new changes every time we are done recording the derivations
happening in the HIR.
We have to do this after recording everything since we still do some
mutations on the cache when recording mutations.
Test Plan:
Test the following in playground:
```js
// @validateNoDerivedComputationsInEffects_exp
function Component({ value }) {
  const [checked, setChecked] = useState('');
  useEffect(() => {
    setChecked(value === '' ? [] : value.split(','));
  }, [value]);
  return (
    <div>{checked}</div>
  );
}
```
This no longer causes an infinite loop.
Added a test case in the next PR in the stack
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/34967).
* #35044
* #35020
* #34973
* #34972
* #34995
* __->__ #34967