h2::Error now knows whether a protocol error happened because the user
sent it, because it was received from the remote peer, or because the
library itself detected a protocol violation and emitted the error.
It also keeps track of whether the error came from a RST_STREAM or
GOAWAY frame, and in the case of the latter, it includes any additional
debug data.
Fixes #530
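As a rough sketch of how a caller might inspect this classification (the
accessor names used here, such as `is_remote`, `is_library`, and `reason`,
are assumptions about the resulting API, not a definitive reference):

```rust
use h2::{Error, Reason};

// Sketch only: classify an error surfaced by h2. The accessor names are
// assumed and may differ from the actual API.
fn describe(err: &Error) -> String {
    let reason = err.reason().unwrap_or(Reason::INTERNAL_ERROR);
    if err.is_remote() {
        // Received from the peer, e.g. via RST_STREAM or GOAWAY.
        format!("peer sent {:?}", reason)
    } else if err.is_library() {
        // h2 itself detected a protocol violation and emitted the error.
        format!("library detected {:?}", reason)
    } else {
        // The local user of the library sent the error.
        format!("user sent {:?}", reason)
    }
}
```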
If a user stored a `StreamRef` to the same stream in the request or
response extensions, it would be dropped while the internal store lock
was held. That would lead to a deadlock, since dropping a stream ref
tries to take the store lock to clean up.
Clear the extensions of the Request and Response before locking the
store to prevent this.
Fixes hyperium/hyper#2621
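A minimal sketch of the pattern behind the fix, using stand-in types
rather than h2 internals: anything whose `Drop` needs the store lock has
to be dropped before that lock is taken.

```rust
use std::sync::{Arc, Mutex};

// Stand-ins for h2 internals: a shared store, and a stream ref whose Drop
// re-acquires the store lock to clean up its stream's state.
struct Store {
    active_streams: i64,
}

struct StreamRef(Arc<Mutex<Store>>);

impl Drop for StreamRef {
    fn drop(&mut self) {
        self.0.lock().unwrap().active_streams -= 1;
    }
}

fn finish_request(store: &Arc<Mutex<Store>>, mut extensions: Option<StreamRef>) {
    // Drop anything the user stashed in the request/response extensions
    // *before* taking the store lock; dropping it while the lock is held
    // would deadlock, since StreamRef::drop locks the same Mutex.
    extensions.take();

    let _store = store.lock().unwrap();
    // ... normal cleanup under the lock ...
}
```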
We've adopted `tracing` for diagnostics, but currently, it is just being
used as a drop-in replacement for the `log` crate. Ideally, we would
want to start emitting more structured diagnostics, using `tracing`'s
`Span`s and structured key-value fields.
A lot of the logging in `h2` is already written in a style that imitates
the formatting of structured key-value logs, but as textual log
messages. Migrating the logs to structured `tracing` events is therefore
straightforward. I've also started adding spans, mostly in the read path.
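For example, a message that merely imitates key-value formatting can
become a structured event along these lines (the span and field names
here are illustrative, not lifted from the h2 source):

```rust
use tracing::{trace, trace_span};

fn on_frame(kind: &str, stream_id: u32, len: usize) {
    // Old style: a textual message formatted to *look* structured.
    // log::trace!("received frame; kind={}; stream={}; len={}", kind, stream_id, len);

    // New style: a span on the read path plus an event with structured
    // key-value fields that subscribers can filter and aggregate on.
    let span = trace_span!("recv_frame", stream.id = stream_id);
    let _enter = span.enter();
    trace!(frame.kind = %kind, frame.len = len, "received frame");
}
```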
Finally, I've updated the tests to use `tracing` rather than
`env_logger`. The tracing setup happens in a macro, so that a span named
after each test can be generated and entered. This will
make the test output easier to read if multiple tests are run
concurrently with `--nocapture`.
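A rough sketch of what that per-test setup might look like; the macro
name and subscriber configuration are assumptions, not the exact
h2-support code:

```rust
// Assumed shape: install a subscriber and enter a span named after the
// test, so concurrent `--nocapture` output stays attributable.
macro_rules! h2_test {
    ($name:ident, $body:block) => {
        #[test]
        fn $name() {
            let subscriber = tracing_subscriber::fmt()
                .with_max_level(tracing::Level::TRACE)
                .with_test_writer()
                .finish();
            let _guard = tracing::subscriber::set_default(subscriber);
            let span = tracing::info_span!("test", name = stringify!($name));
            let _enter = span.enter();
            $body
        }
    };
}

h2_test!(example_case, {
    tracing::info!("events here carry the test's span");
});
```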
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This adds a `SendRequestExt` trait to h2-support, with a `get` method
that does a lot of the repeated request building stuff many test cases
were doing.
As a first step, this cleans up the stream_states tests to use it.
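A sketch of what such a trait could look like, assuming the `http`
request builder and h2's client `SendRequest`; the exact signature in
h2-support may differ:

```rust
use h2::client::{ResponseFuture, SendRequest};
use h2::SendStream;

// Assumed shape of the helper: build a plain GET request for `uri` and
// send it with end_of_stream = true.
pub trait SendRequestExt {
    fn get(&mut self, uri: &str) -> (ResponseFuture, SendStream<bytes::Bytes>);
}

impl SendRequestExt for SendRequest<bytes::Bytes> {
    fn get(&mut self, uri: &str) -> (ResponseFuture, SendStream<bytes::Bytes>) {
        let request = http::Request::builder()
            .method(http::Method::GET)
            .uri(uri)
            .body(())
            .expect("request builder");
        self.send_request(request, true).expect("send_request")
    }
}
```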
Previously, any streams that were dropped or closed while not having
consumed the inflight received window capacity would simply leak that
capacity for the connection. This could easily happen if a `RecvStream`
were dropped before fully consuming the data, and therefore a user would
have no idea how much capacity to release in the first place. This
resulted in stalled connections that would never have capacity again.
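For context, this is the consumption pattern the connection-level
accounting has to survive: each chunk's capacity is released explicitly,
but if the `RecvStream` is dropped partway through, the remaining
in-flight capacity used to be leaked. A sketch assuming `poll_data` and
the `FlowControl::release_capacity` API:

```rust
use std::future::poll_fn;

// Drain a response body, returning receive-window capacity to the peer as
// each chunk is processed. If this loop exits early (or the RecvStream is
// simply dropped), the capacity held by unconsumed data was previously
// leaked at the connection level; it is now reclaimed when the stream
// closes.
async fn drain(mut body: h2::RecvStream) -> Result<(), h2::Error> {
    while let Some(chunk) = poll_fn(|cx| body.poll_data(cx)).await {
        let chunk = chunk?;
        // ... process the bytes ...
        body.flow_control().release_capacity(chunk.len())?;
    }
    Ok(())
}
```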
I believe this was an oversight: a stream that is reset can still have
some capacity assigned to it (e.g. if said capacity was assigned in the
same poll as the reset), which should be redistributed.
Because `send_reset` called `recv_err`, which called
`reclaim_all_capacity`, which eventually called `transition(stream, ..)`,
all before the RST_STREAM frame was enqueued, it was possible for the
stream to get unlinked from the store (if there was any connection-level
capacity to reassign). This could then cause the stream to be "leaked" on
drop/EOF, since it would no longer be iterated.
Fix this by delaying the call to `reclaim_all_capacity` until _after_
the RST_STREAM frame is enqueued.
A test demonstrating the issue is included.
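In outline, the change swaps the order of two steps. The sketch below
uses stand-in types and functions to show the idea; it is not the actual
h2 code:

```rust
// Stand-ins for h2 internals, only to illustrate the ordering change.
struct Stream;
struct SendBuffer;

fn reclaim_all_capacity(_stream: &mut Stream) {
    // May reassign connection-level capacity and, via `transition(stream, ..)`,
    // unlink the stream from the store.
}

fn enqueue_reset(_stream: &mut Stream, _buf: &mut SendBuffer) {
    // Queues the RST_STREAM frame; the stream must still be linked so the
    // frame is actually iterated and written out.
}

fn send_reset(stream: &mut Stream, buf: &mut SendBuffer) {
    // Old order: reclaim_all_capacity(stream); enqueue_reset(stream, buf);
    // That could unlink the stream before the frame was queued, leaking the
    // stream on drop/EOF because it was no longer iterated.

    // Fixed order: enqueue the frame first, then reclaim the capacity.
    enqueue_reset(stream, buf);
    reclaim_all_capacity(stream);
}
```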