This changes all the `extern "C"` functions in `hyper::ffi` to check
their pointer arguments for `NULL` before trying to use them. Before,
we would just assume the programmer had passed a good pointer, which
could result in segmentation faults. Now:
- In debug builds, the arguments are asserted to be non-null, so if one
  is null, a message identifying the argument name is printed and the
  process crashes.
- In release builds, the null check still happens, but on failure the
  function returns early, with a return value indicating failure if the
  return type allows (such as returning `NULL`, or `HYPERE_INVALID_ARG`).
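A minimal sketch of the guard pattern, assuming a hypothetical
`non_null!` macro and example function (hyper's actual internals may be
shaped differently): the `debug_assert!` supplies the crashing
debug-build check, and the plain `if` supplies the release-build early
return.

```rust
// Hypothetical sketch: assert in debug builds, return early in release.
macro_rules! non_null {
    ($ptr:ident, $fail:expr) => {{
        debug_assert!(!$ptr.is_null(), concat!(stringify!($ptr), " must not be null"));
        if $ptr.is_null() {
            return $fail;
        }
    }};
}

#[no_mangle]
pub extern "C" fn hyper_example_get(ptr: *const u8) -> *const u8 {
    // Debug builds crash with a message naming `ptr`; release builds
    // return NULL to signal failure.
    non_null!(ptr, std::ptr::null());
    ptr
}
```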
Closes #2620
Fix the header title-casing to work with consecutive
dashes. Previously, a dash triggered uppercasing of the byte after it;
with two dashes in a row, the first dash "uppercased" the second dash,
which then didn't count as a word boundary, so `weird--header` was
cased as `Weird--header` instead of `Weird--Header`.
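A minimal sketch of title-casing that handles this, recomputing the
word boundary from every byte instead of skipping past an uppercased
one (hyper's actual implementation may differ):

```rust
// Title-case a header name byte-wise; every dash marks a word boundary,
// including a dash that immediately follows another dash.
fn title_case(name: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(name.len());
    let mut at_boundary = true;
    for &b in name {
        out.push(if at_boundary { b.to_ascii_uppercase() } else { b });
        // Recompute from the current byte, so `--` keeps the boundary set.
        at_boundary = b == b'-';
    }
    out
}

// title_case(b"weird--header") == b"Weird--Header"
```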
These options are currently available on the high-level builder only.
Along the way, rename the setters to follow the public API conventions
and add docs.
Closes #2461
When a `CONNECT` over HTTP/2 has been established, and the user tries to write data right as the peer closes the stream, it will no longer be returned as a "user error". The reset code is now checked and converted into an appropriate `io::ErrorKind`.
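A minimal sketch of that conversion using the `h2` crate's `Reason`
constants (the exact mapping hyper applies may differ):

```rust
use std::io;

// Convert an HTTP/2 RST_STREAM reason into an io::ErrorKind so the
// failed write surfaces as an I/O error rather than a "user error".
fn reason_to_kind(reason: h2::Reason) -> io::ErrorKind {
    if reason == h2::Reason::NO_ERROR || reason == h2::Reason::CANCEL {
        io::ErrorKind::BrokenPipe
    } else if reason == h2::Reason::REFUSED_STREAM {
        io::ErrorKind::ConnectionRefused
    } else {
        io::ErrorKind::Other
    }
}
```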
We don't really care what bytes are in chunked extensions. We ignore
them until we find a CRLF. However, some other HTTP implementations may
only look for an LF, forgetting that chunked requires the CR as well.
To save them from themselves, this makes hyper reject any chunked
extensions that include an LF byte.
This isn't a *bug*. No one ever cares what's in the extensions. This is
meant as a way to help implementations that don't decode chunked
encoding correctly. It shouldn't really affect anyone in the real
world.
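A minimal sketch of the stricter scan, assuming `bytes` holds the rest
of the chunk-size line (hyper's actual decoder is an incremental state
machine, so its shape differs):

```rust
// Skip chunk-extension bytes until CRLF, rejecting any bare LF.
fn skip_extensions(bytes: &[u8]) -> Result<usize, &'static str> {
    let mut i = 0;
    while i < bytes.len() {
        match bytes[i] {
            // A proper CRLF ends the chunk-size line.
            b'\r' if bytes.get(i + 1) == Some(&b'\n') => return Ok(i + 2),
            // An LF that wasn't preceded by CR is now rejected.
            b'\n' => return Err("invalid chunk extension: LF without CR"),
            _ => i += 1,
        }
    }
    Err("incomplete chunk-size line")
}
```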
When http2_only is true, we never try to open a new connection if one
is already open, which means that if the existing connection checked
out of the pool has been closed, the request won't happen.
Nothing actually specifies that Proxy-Authenticate and
Proxy-Authorization are forbidden over h2. Moreover, h2 also supports
CONNECT requests, which are specifically made for issuing requests
through a proxy, and those proxies may require authentication,
sometimes through Proxy-Authorization.
Note that there is an openwebdocs project that just started to clear
up any MDN-induced confusion in implementations:
https://github.com/openwebdocs/project/issues/43
This defines an extension type, used in requests by the client, to set
up a callback for receipt of informational (1xx) responses. The type
isn't currently public, and is only usable through the C API.
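A hypothetical sketch of what such an extension type could look like;
the names are illustrative, not hyper's actual (non-public)
definitions:

```rust
use std::sync::Arc;

// Illustrative callback extension: stored in a request's extensions map
// and invoked once per informational (1xx) response the connection reads.
#[derive(Clone)]
struct OnInformational(Arc<dyn Fn(&http::Response<()>) + Send + Sync>);

// Attaching it to a request (sketch):
// req.extensions_mut().insert(OnInformational(Arc::new(|res| {
//     println!("got informational: {}", res.status());
// })));
```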
The HTTP/1 content-length parser would accept lengths that were prefixed
with a plus sign (for example, `+1234`). The specification restricts the
content-length header to only allow DIGITs, making such a content-length
illegal. Since some HTTP implementations protect against that, and
others misinterpret the length when the plus sign is present, this
fixes hyper to always reject such content lengths.
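A minimal sketch of a digits-only parse (hyper's actual parser also
handles things like duplicate headers, so its shape differs):

```rust
// Parse a Content-Length value, rejecting any byte that isn't an ASCII
// digit, such as a leading '+'.
fn parse_content_length(value: &[u8]) -> Option<u64> {
    if value.is_empty() || !value.iter().all(u8::is_ascii_digit) {
        return None;
    }
    // parse() also rejects values that would overflow a u64.
    std::str::from_utf8(value).ok()?.parse::<u64>().ok()
}
```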
See GHSA-f3pg-qwvg-p99c
The HTTP/1 chunked decoder, when decoding the size of a chunk, could
overflow the computed size if the hex value was too large. This fixes
it by adding an overflow check in the decoder.
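A minimal sketch of the checked accumulation, folding in one hex digit
at a time (a hypothetical helper, not hyper's exact code):

```rust
// Fold one hex digit into the running chunk size, failing on overflow
// instead of silently wrapping.
fn push_hex_digit(size: u64, digit: u8) -> Result<u64, &'static str> {
    let d = (digit as char).to_digit(16).ok_or("invalid hex digit")? as u64;
    size.checked_mul(16)
        .and_then(|s| s.checked_add(d))
        .ok_or("chunk size overflow")
}
```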
See GHSA-5h46-h7hh-c6x9
Note the practical effects of this change:
- Dependency count with --features full dropped from 65 to 55.
- Time to compile after a clean build dropped from 48s to 35s (on a pretty underpowered VM).
Closes #2388
If the write buffer was filled with large bufs from the user, such that
it couldn't be fully written to the transport, the write buffer could
start to grow significantly as it moved its cursor without shifting over
the unwritten bytes.
This will now try to shift over the unwritten bytes if the next buf
wouldn't fit in the already allocated space.
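A minimal sketch of the shifting strategy on a flat byte buffer
(hyper's write buffer is more involved, so this is only illustrative):

```rust
// If the next buf won't fit in the remaining capacity, move the
// unwritten tail back to the front of the allocation instead of
// letting the buffer grow.
fn make_room(buf: &mut Vec<u8>, read_pos: &mut usize, needed: usize) {
    if buf.capacity() - buf.len() < needed && *read_pos > 0 {
        let unwritten = buf.len() - *read_pos;
        buf.copy_within(*read_pos.., 0);
        buf.truncate(unwritten);
        *read_pos = 0;
    }
}
```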
This introduces a delay before sending a ping to calculate the BDP. The
delay becomes shorter while the BDP is changing, to improve throughput
quickly, but becomes longer as the BDP stabilizes, to reduce the number
of pings sent. This improved the performance of the adaptive window
end_to_end benchmark.
It should also reduce the number of pings the remote has to deal with,
hopefully preventing hyper from triggering ENHANCE_YOUR_CALM errors.
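A minimal sketch of the back-off idea; the constants and shape are
illustrative, not hyper's actual tuning:

```rust
use std::time::Duration;

// Pick the delay before the next BDP ping: probe quickly while the
// estimate is still moving, and back off once it looks stable.
fn next_ping_delay(current: Duration, bdp_changed: bool) -> Duration {
    if bdp_changed {
        Duration::from_millis(100)
    } else {
        (current * 2).min(Duration::from_secs(10))
    }
}
```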
The discussion in #2462 opened up some larger questions about more comprehensive approaches to the
error API, with the agreement that additional methods would be desirable in the short term. These
methods address an immediate need of our customers, so I would like to get them in first before we
flesh out a future solution.
One potentially controversial choice here is to still return `true` from `is_parse_error()` for
these variants. I hope the naming of the methods makes it clear that the new predicates are
refinements of the existing one, but I didn't want to change the behavior of `is_parse_error()`,
which would require a major version bump.
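A sketch of that refinement relationship, using simplified stand-ins
for hyper's internal error enums (the variant names are illustrative):

```rust
enum Parse { TooLarge, Status }
enum Kind { Parse(Parse) }
struct Error { kind: Kind }

impl Error {
    fn is_parse_error(&self) -> bool {
        matches!(self.kind, Kind::Parse(_))
    }

    // A refinement: matches a strict subset of is_parse_error(), so the
    // broader predicate still returns true for the same error.
    fn is_parse_too_large(&self) -> bool {
        matches!(self.kind, Kind::Parse(Parse::TooLarge))
    }
}
```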
It can sometimes be tricky to discover where to use `move` closures,
`async move {}`, and `.clone()` when creating a server. This adds a
slightly bigger example that will hopefully help some.
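A minimal sketch of the pattern such an example demonstrates, assuming
hyper 0.14's `make_service_fn`/`service_fn` API: clone shared state
before each `move` closure so every connection and every request gets
its own handle.

```rust
use std::convert::Infallible;
use std::sync::Arc;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Response, Server};

#[tokio::main]
async fn main() {
    let state = Arc::new(String::from("shared"));
    let make_svc = make_service_fn(move |_conn| {
        // Clone per connection, so this closure can be called again.
        let state = state.clone();
        async move {
            Ok::<_, Infallible>(service_fn(move |_req| {
                // Clone per request, so the inner closure stays reusable.
                let state = state.clone();
                async move {
                    Ok::<_, Infallible>(Response::new(Body::from(
                        format!("state: {}", state),
                    )))
                }
            }))
        }
    });
    Server::bind(&([127, 0, 0, 1], 3000).into())
        .serve(make_svc)
        .await
        .unwrap();
}
```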
Fixes https://github.com/hyperium/hyper/issues/2446
Decouple preserving header case from FFI:
The feature is now supported in both the server and the client,
and can be combined with the title-case feature for headers
that don't have entries in the header case map.
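A minimal sketch of the fallback order when writing a header name,
reusing the `title_case` sketch from earlier; `orig_case` stands in for
a lookup in the recorded case map (both names are hypothetical):

```rust
use std::borrow::Cow;

// Prefer the exact casing recorded at parse time; otherwise fall back
// to title-casing the canonical (lowercase) name.
fn name_bytes<'a>(orig_case: Option<&'a [u8]>, canonical: &str) -> Cow<'a, [u8]> {
    match orig_case {
        Some(exact) => Cow::Borrowed(exact),
        None => Cow::Owned(title_case(canonical.as_bytes())),
    }
}
```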
Closes #2313
As I understand it, `cargo rustc` in gen_header.sh generates a ton of
errors, but still manages to produce an object file that cbindgen can
use to generate hyper.h.
However, when I tried to make a separate change to add more fields to
hyper.h, I learned that `cargo rustc` stops once it reaches 50 errors,
which I hit. I was able to buy some headroom and fix a number of the
compilation errors by adding imports to the fake Cargo.toml we
generate in gen_header.sh.
I wasn't sure how to resolve imports like `crate::Result`, which appear
to reference the top-level src/error.rs and produce an error when
compiled in gen_header.sh. But I only need to stay under the 50-error
limit for now, which I was able to do by adding the imports.
It is possible that someone more familiar with Rust than I am could
look at this and know what to change to get the total number of errors
down to zero.