Update the list of root certificates in src/node_root_certs.h with
tools/mk-ca-bundle.pl.
Certificates added:
- GlobalSign Root CA - R6
- OISTE WISeKey Global Root GC CA
- GTS Root R1
- GTS Root R2
- GTS Root R3
- GTS Root R4
- UCA Global G2 Root
- UCA Extended Validation Root
- Certigna Root CA
Certificates removed:
- Visa eCommerce Root
- TÜRKTRUST Elektronik Sertifika Hizmet Sağlayıcısı H5
- Certplus Root CA G1
- Certplus Root CA G2
- OpenTrust Root CA G1
- OpenTrust Root CA G2
- OpenTrust Root CA G3
PR-URL: https://github.com/nodejs/node/pull/25113
Backport-PR-URL: https://github.com/nodejs/node/pull/29137
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ben Noordhuis <info@bnoordhuis.nl>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
Some pty tests persistently hung on the AIX CI buildbots. Fix that by
adding a helper script that properly sets up the pty before spawning
the script under test.
On investigation I discovered that the test runner hung when it tried
to close the slave pty's file descriptor, probably due to a bug in
AIX's pty implementation. I could reproduce it with a short C program.
The test runner also leaked file descriptors to the child process.
I couldn't convince Python's `subprocess.Popen()` to do what I wanted
it to do so I opted to move the logic to a helper script that can do
fork/setsid/etc. without having to worry about stomping on state in
tools/test.py.
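For illustration only, here is a rough C++/POSIX sketch of the kind of
pty and session setup such a helper has to perform; the actual helper
is a Python script, handles more platform quirks, and forwards input
both ways. All names below are illustrative, with minimal error
handling:

```cpp
// Sketch: give the child the slave end of a pty as its controlling
// terminal while the parent only ever touches the master end.
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char** argv) {
  if (argc < 2) {
    fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
    return 1;
  }

  int master = posix_openpt(O_RDWR | O_NOCTTY);
  if (master < 0 || grantpt(master) != 0 || unlockpt(master) != 0) {
    perror("pty setup");
    return 1;
  }

  pid_t pid = fork();
  if (pid == 0) {
    // Child: start a new session, then open the slave side *without*
    // O_NOCTTY so it becomes the controlling terminal of that session.
    setsid();
    int slave = open(ptsname(master), O_RDWR);
    if (slave < 0) _exit(1);
    dup2(slave, 0);
    dup2(slave, 1);
    dup2(slave, 2);
    close(master);
    execvp(argv[1], &argv[1]);
    _exit(127);
  }

  // Parent: read the child's output from the master and forward it.
  char buf[4096];
  ssize_t n;
  while ((n = read(master, buf, sizeof(buf))) > 0)
    write(STDOUT_FILENO, buf, n);
  return 0;
}
```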
In the process I also uncovered some bugs in the pty module of the
Python distro that ships with macOS 10.14, leading me to reimplement
a sizable chunk of the functionality of that module.
And last but not least, of course there are differences between ptys
on different platforms and the helper script has to paper over that.
Of course.
Really, this commit took me longer to put together than I care to admit.
Caveat emptor: this commit takes the hacky ^D feeding to the slave out
of tools/test.py and puts it in the *.in input files. You can also feed
other control characters to tests, like ^C or ^Z, simply by inserting
them into the corresponding input file. I think that's nice.
Fixes: https://github.com/nodejs/build/issues/1820
Fixes: https://github.com/nodejs/node/issues/28489
PR-URL: https://github.com/nodejs/node/pull/28600
Backport-PR-URL: https://github.com/nodejs/node/pull/29599
Reviewed-By: Richard Lau <riclau@uk.ibm.com>
Reviewed-By: Sam Roberts <vieuxtech@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
This is a security release.
Notable changes:
Node.js, as well as many other implementations of HTTP/2, has been
found vulnerable to Denial of Service attacks.
See https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md
for more information.
Vulnerabilities fixed:
* CVE-2019-9511 “Data Dribble”: The attacker requests a large amount of
data from a specified resource over multiple streams. They manipulate
window size and stream priority to force the server to queue the data
in 1-byte chunks. Depending on how efficiently this data is queued,
this can consume excess CPU, memory, or both, potentially leading to a
denial of service.
* CVE-2019-9512 “Ping Flood”: The attacker sends continual pings to an
HTTP/2 peer, causing the peer to build an internal queue of responses.
Depending on how efficiently this data is queued, this can consume
excess CPU, memory, or both, potentially leading to a denial of
service.
* CVE-2019-9513 “Resource Loop”: The attacker creates multiple request
streams and continually shuffles the priority of the streams in a way
that causes substantial churn to the priority tree. This can consume
excess CPU, potentially leading to a denial of service.
* CVE-2019-9514 “Reset Flood”: The attacker opens a number of streams
and sends an invalid request over each stream that should solicit a
stream of RST_STREAM frames from the peer. Depending on how the peer
queues the RST_STREAM frames, this can consume excess memory, CPU, or
both, potentially leading to a denial of service.
* CVE-2019-9515 “Settings Flood”: The attacker sends a stream of
SETTINGS frames to the peer. Since the RFC requires that the peer
reply with one acknowledgement per SETTINGS frame, an empty SETTINGS
frame is almost equivalent in behavior to a ping. Depending on how
efficiently this data is queued, this can consume excess CPU, memory,
or both, potentially leading to a denial of service.
* CVE-2019-9516 “0-Length Headers Leak”: The attacker sends a stream of
headers with a 0-length header name and 0-length header value,
optionally Huffman encoded into 1-byte or greater headers. Some
implementations allocate memory for these headers and keep the
allocation alive until the session dies. This can consume excess
memory, potentially leading to a denial of service.
* CVE-2019-9517 “Internal Data Buffering”: The attacker opens the HTTP/2
window so the peer can send without constraint; however, they leave
the TCP window closed so the peer cannot actually write (many of) the
bytes on the wire. The attacker then sends a stream of requests for a
large response object. Depending on how the servers queue the
responses, this can consume excess memory, CPU, or both, potentially
leading to a denial of service.
* CVE-2019-9518 “Empty Frames Flood”: The attacker sends a stream of
frames with an empty payload and without the end-of-stream flag. These
frames can be DATA, HEADERS, CONTINUATION and/or PUSH_PROMISE. The
peer spends time processing each frame disproportionate to attack
bandwidth. This can consume excess CPU, potentially leading to a
denial of service. (Discovered by Piotr Sikora of Google)
PR-URL: https://github.com/nodejs/node/pull/29152
If we are waiting for the ability to send more output, we should not
process more input. This commit a) makes us send output earlier, during
the processing of input, if a lot of it has accumulated, and b) allows
interrupting the call into nghttp2 that processes input data and
resuming it later, if we find ourselves waiting to be able to send
more output.
This is part of mitigating CVE-2019-9511/CVE-2019-9517.
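As a rough sketch of mechanism b): nghttp2 lets a receive callback
return `NGHTTP2_ERR_PAUSE`, which makes `nghttp2_session_mem_recv()`
return early so the caller can resume from the same offset later. The
threshold and the `SessionState` fields below are made up for
illustration and are not the actual node implementation:

```cpp
// Sketch: pause input processing when too much output is pending.
#include <nghttp2/nghttp2.h>
#include <stddef.h>

struct SessionState {
  nghttp2_session* session;
  size_t pending_output_bytes;  // bytes queued but not yet written out
  const uint8_t* input;         // current input buffer
  size_t input_len;
  size_t input_offset;          // how far nghttp2 has consumed it
};

static const size_t kOutputThreshold = 64 * 1024;  // hypothetical limit

// Registered via nghttp2_session_callbacks_set_on_data_chunk_recv_callback.
// Returning NGHTTP2_ERR_PAUSE makes nghttp2_session_mem_recv() return
// immediately with the number of bytes processed so far.
static int OnDataChunk(nghttp2_session* session, uint8_t flags,
                       int32_t stream_id, const uint8_t* data, size_t len,
                       void* user_data) {
  SessionState* state = static_cast<SessionState*>(user_data);
  // ... hand `data` off to the stream ...
  if (state->pending_output_bytes > kOutputThreshold)
    return NGHTTP2_ERR_PAUSE;  // stop consuming input for now
  return 0;
}

// Called when new socket data arrives, and again when enough output has
// drained that a paused read can be resumed.
static bool ConsumeInput(SessionState* state) {
  while (state->input_offset < state->input_len) {
    ssize_t rv = nghttp2_session_mem_recv(
        state->session,
        state->input + state->input_offset,
        state->input_len - state->input_offset);
    if (rv < 0) return false;  // protocol error
    state->input_offset += static_cast<size_t>(rv);
    if (state->pending_output_bytes > kOutputThreshold)
      return true;             // paused; resume later
  }
  return true;
}
```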
Backport-PR-URL: https://github.com/nodejs/node/pull/29124
PR-URL: https://github.com/nodejs/node/pull/29122
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
If a write to the underlying socket finishes asynchronously, we cannot
write any more data at that point without waiting
for it to finish. If this happens, we should also not be producing any
more input.
This is part of mitigating CVE-2019-9511/CVE-2019-9517.
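A minimal sketch of that idea, with hypothetical names
(`write_in_flight`, `StartAsyncWrite`); the real change hooks into
node's stream machinery rather than a bare flag:

```cpp
// Sketch: if the previous socket write has not completed yet, do not
// pull more output out of nghttp2.
#include <nghttp2/nghttp2.h>

struct SessionState {
  nghttp2_session* session;
  bool write_in_flight;  // set while an asynchronous write is running
};

// Assumed to exist elsewhere: performs the asynchronous write and calls
// OnWriteComplete() once the bytes have been handed to the OS.
void StartAsyncWrite(SessionState* state, const uint8_t* data, size_t len);

// Pull frames out of nghttp2 and hand them to the socket, but only when
// no asynchronous write is still pending.
void MaybeSendPending(SessionState* state) {
  if (state->write_in_flight) return;
  const uint8_t* data;
  ssize_t len = nghttp2_session_mem_send(state->session, &data);
  if (len <= 0) return;  // nothing to send (or an error)
  state->write_in_flight = true;
  StartAsyncWrite(state, data, static_cast<size_t>(len));
}

void OnWriteComplete(SessionState* state) {
  state->write_in_flight = false;
  MaybeSendPending(state);  // now it is safe to produce more output
}
```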
Backport-PR-URL: https://github.com/nodejs/node/pull/29124
PR-URL: https://github.com/nodejs/node/pull/29122
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Limit the number of streams that are rejected upon creation. Since
each such rejection is associated with an `NGHTTP2_ENHANCE_YOUR_CALM`
error that should tell the peer to not open any more streams,
continuing to open streams should be read as a sign of a misbehaving
peer. The limit is currently set to 100 but could be changed or made
configurable.
This is intended to mitigate CVE-2019-9514.
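Roughly, and with hypothetical names (`kMaxRejectedStreams`,
`rejected_stream_count`), the bookkeeping looks like this; the actual
implementation lives in node's HTTP/2 bindings:

```cpp
// Sketch: count rejected incoming streams and drop the session once the
// count passes a limit.
#include <nghttp2/nghttp2.h>

static const uint32_t kMaxRejectedStreams = 100;

struct SessionState {
  nghttp2_session* session;
  uint32_t rejected_stream_count;
};

// Called when an incoming stream cannot be accepted (e.g. the server is
// already at its stream limit).
int RejectStream(SessionState* state, int32_t stream_id) {
  // Tell the peer to back off; a well-behaved client stops opening streams.
  nghttp2_submit_rst_stream(state->session, NGHTTP2_FLAG_NONE, stream_id,
                            NGHTTP2_ENHANCE_YOUR_CALM);
  if (++state->rejected_stream_count > kMaxRejectedStreams) {
    // The peer keeps opening streams despite being told not to: treat it
    // as misbehaving and tear the whole session down.
    return nghttp2_session_terminate_session(state->session,
                                             NGHTTP2_ENHANCE_YOUR_CALM);
  }
  return 0;
}
```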
Backport-PR-URL: https://github.com/nodejs/node/pull/29124
PR-URL: https://github.com/nodejs/node/pull/29122
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
Lazily allocate `ArrayBuffer`s for the contents of DATA frames.
Creating `ArrayBuffer`s is, sadly, not a cheap operation with V8.
This is part of performance improvements to mitigate CVE-2019-9513.
Together with the previous commit, these changes improve throughput
in the adversarial case by about 100%, and there is little more
that we can do besides artificially limiting the rate of incoming
metadata frames (i.e. after this patch, CPU usage is virtually
exclusively in libnghttp2).
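The pattern, sketched here with made-up class names and assuming a V8
version that exposes `ArrayBuffer::GetBackingStore()` (older release
lines used `GetContents()` instead):

```cpp
// Sketch: hold incoming DATA bytes in a plain C++ buffer and only
// materialize a V8 ArrayBuffer when JS actually asks for the chunk.
#include <v8.h>
#include <cstdint>
#include <cstring>
#include <vector>

class PendingDataChunk {
 public:
  // Called from the nghttp2 data-chunk callback: just copy the bytes,
  // no V8 allocation happens here.
  void Append(const uint8_t* data, size_t len) {
    bytes_.insert(bytes_.end(), data, data + len);
  }

  // Called only when the chunk is about to be delivered to JS; this is
  // the first point at which an ArrayBuffer is created.
  v8::Local<v8::ArrayBuffer> ToArrayBuffer(v8::Isolate* isolate) {
    v8::Local<v8::ArrayBuffer> ab =
        v8::ArrayBuffer::New(isolate, bytes_.size());
    if (!bytes_.empty())
      std::memcpy(ab->GetBackingStore()->Data(), bytes_.data(),
                  bytes_.size());
    return ab;
  }

 private:
  std::vector<uint8_t> bytes_;
};
```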
[This backport also applies changes from 83e1b97443 and required
some manual work due to the lack of `AllocatedBuffer` on v10.x.
More work was necessary for v8.x, including copying utilities
for `util.h` from more recent Node.js versions.]
Refs: https://github.com/nodejs/node/pull/26201
Backport-PR-URL: https://github.com/nodejs/node/pull/29124
PR-URL: https://github.com/nodejs/node/pull/29122
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>
For some JS events, it only makes sense to call into JS when there
are listeners for the event in question.
The overhead is noticeable if a lot of these events are emitted during
the lifetime of a session. To reduce this overhead, keep track of
whether any/how many JS listeners are present, and if there are none,
skip calls into JS altogether.
This is part of performance improvements to mitigate CVE-2019-9513.
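A stripped-down sketch of the idea, with hypothetical event names; in
node the counts are shared with the JS side (which updates them from
its listener hooks) rather than set through C++ methods:

```cpp
// Sketch: keep a per-event listener count and consult it before
// crossing the C++/JS boundary at all.
#include <cstdint>

enum JsEvent : uint8_t {
  kEventFrameSent = 0,
  kEventPing,
  kEventAltSvc,
  kEventCount
};

class ListenerTracker {
 public:
  // The JS side would bump these counters from its newListener /
  // removeListener hooks (e.g. through a shared typed array).
  void Add(JsEvent ev) { ++counts_[ev]; }
  void Remove(JsEvent ev) { --counts_[ev]; }

  bool HasListeners(JsEvent ev) const { return counts_[ev] > 0; }

 private:
  uint32_t counts_[kEventCount] = {0};
};

// In the C++ event source: skip the (comparatively expensive) call into
// JS entirely when nobody is listening.
template <typename EmitFn>
void EmitIfListening(const ListenerTracker& tracker, JsEvent ev,
                     EmitFn emit_to_js) {
  if (!tracker.HasListeners(ev)) return;  // fast path: no JS call at all
  emit_to_js();
}
```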
Backport-PR-URL: https://github.com/nodejs/node/pull/29124
PR-URL: https://github.com/nodejs/node/pull/29122
Reviewed-By: Rich Trott <rtrott@gmail.com>
Reviewed-By: James M Snell <jasnell@gmail.com>