This adds a `SendRequestExt` trait to h2-support, with a `get` method
that encapsulates the repeated request-building boilerplate many test
cases were duplicating.
As a first step, this cleans up the stream_states tests to use it.
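
Roughly, such a helper might look like the sketch below. The trait and
method names come from the description above, but the exact bounds, error
handling, and return type in h2-support are assumptions here:

    use h2::client::{ResponseFuture, SendRequest};
    use http::Request;

    pub trait SendRequestExt {
        /// Build and send an empty-body GET request for `uri`,
        /// returning the response future.
        fn get(&mut self, uri: &str) -> ResponseFuture;
    }

    impl SendRequestExt for SendRequest<bytes::Bytes> {
        fn get(&mut self, uri: &str) -> ResponseFuture {
            let request = Request::builder()
                .method("GET")
                .uri(uri)
                .body(())
                .expect("valid request");

            // `true` marks end-of-stream, since a GET carries no body.
            let (response, _stream) = self
                .send_request(request, true)
                .expect("send_request");

            response
        }
    }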
Previously, any streams that were dropped or closed while not having
consumed the inflight received window capacity would simply leak that
capacity for the connection. This could easily happen if a `RecvStream`
was dropped before fully consuming the data; in that case, the user would
have no idea how much capacity to release in the first place. This
resulted in stalled connections that would never have capacity again.
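
As a toy illustration of the fix -- stand-in types here, not h2's actual
internals -- a dropped stream should hand its unconsumed in-flight bytes
back to the connection window:

    use std::cell::RefCell;
    use std::rc::Rc;

    // Toy model: the connection-level window that streams draw from.
    struct ConnWindow {
        available: u32,
    }

    struct StreamRecv {
        conn: Rc<RefCell<ConnWindow>>,
        // Bytes received on this stream but not yet released by the user.
        unconsumed: u32,
    }

    impl Drop for StreamRecv {
        fn drop(&mut self) {
            // The fix: return unconsumed in-flight capacity to the
            // connection instead of leaking it when the stream goes away.
            self.conn.borrow_mut().available += self.unconsumed;
        }
    }

    fn main() {
        let conn = Rc::new(RefCell::new(ConnWindow { available: 0 }));
        {
            let _stream = StreamRecv { conn: conn.clone(), unconsumed: 16_384 };
            // Dropped here without the user releasing any capacity.
        }
        assert_eq!(conn.borrow().available, 16_384);
    }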
I believe this was an oversight - a stream that is reset can still have some
capacity assigned to it (e.g. if said capacity was assigned in the same poll as
the reset), which should be redistributed.
Because `send_reset` called `recv_err`, which called `reclaim_all_capacity`,
which eventually called `transition(stream, ..)` -- all of which happened
_before_ the RESET frame was enqueued -- it was possible for the stream to
get unlinked from the store (if there was any connection-level capacity to
reassign). This could then cause the stream to be "leaked" on drop/EOF,
since it would no longer be iterated.
Fix this by delaying the call to `reclaim_all_capacity` until _after_ the
RESET frame is enqueued.
A test demonstrating the issue is included.
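
A minimal self-contained model of the reordering (these stand-in types
capture only the ordering, not h2's real structures):

    struct Stream {
        queued_reset: bool,
        linked: bool,
    }

    fn enqueue_reset(stream: &mut Stream) {
        stream.queued_reset = true;
    }

    fn reclaim_all_capacity(stream: &mut Stream) {
        // Reassigning capacity can transition the stream and unlink it
        // from the store, after which it is no longer iterated.
        stream.linked = false;
    }

    fn send_reset(stream: &mut Stream) {
        // Buggy order: calling reclaim_all_capacity(stream) here could
        // unlink the stream before its RESET frame was ever enqueued.

        // Fixed order: enqueue the frame first, then reclaim.
        enqueue_reset(stream);
        reclaim_all_capacity(stream);
    }

    fn main() {
        let mut stream = Stream { queued_reset: false, linked: true };
        send_reset(&mut stream);
        assert!(stream.queued_reset);
        assert!(!stream.linked);
    }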
* Prevent `pending_open` streams from being released.
This fixes a panic that would otherwise occur in some cases. A test
demonstrating said panic is included.
* Clear the pending_open queue together with everything else.
There was a race condition in the test where the server connection
sometimes closed before the final client operation. This triggered an
unwrap in the test.
This patch updates the test to ensure that the mock server connection
stays open until the client test is complete.
Because `self.pending` doesn't necessarily get cleaned up in a timely
fashion - rather, only when the user calls `poll_ready()` - it was
possible for it to refer to a stream that had already been closed. This
would lead to a panic the next time `poll_ready()` was called.
Instead, hold an `OpaqueStreamRef`, bumping the refcount so the stream
state cannot be released while the reference is live.
A change to an existing test is included which demonstrates the issue.
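
A minimal illustration of the idea, with a plain `Rc` standing in for
`OpaqueStreamRef`:

    use std::rc::Rc;

    // Stand-in for h2's per-stream state.
    struct StreamState {
        id: u32,
    }

    fn main() {
        let stream = Rc::new(StreamState { id: 1 });

        // Cloning bumps the refcount, as holding an OpaqueStreamRef does,
        // so the state outlives any earlier "close" of the stream itself.
        let pending = stream.clone();
        drop(stream);

        // Still safe to inspect via `pending`; no dangling reference.
        assert_eq!(pending.id, 1);
    }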
Previously, monotonic stream IDs (spec 5.1.1) for push promises were not
enforced. This was because push promises went through an entirely
separate code path from normally initiated streams.
This patch unifies the code path for initializing streams via both
HEADERS and PUSH_PROMISE. This is done by first calling `recv.open` in
both cases.
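
With one shared path, the monotonicity rule can be enforced in a single
place. A rough illustration of such a check (illustrative names, not h2's
actual code):

    // Reject any stream ID that does not strictly increase; per spec
    // 5.1.1 this is a connection error of type PROTOCOL_ERROR.
    fn ensure_monotonic(last_id: &mut u32, new_id: u32) -> Result<(), &'static str> {
        if new_id <= *last_id {
            return Err("PROTOCOL_ERROR: stream ID did not increase");
        }
        *last_id = new_id;
        Ok(())
    }

    fn main() {
        let mut last = 0;
        assert!(ensure_monotonic(&mut last, 2).is_ok());
        assert!(ensure_monotonic(&mut last, 4).is_ok());
        // A promised stream ID lower than one already seen is rejected.
        assert!(ensure_monotonic(&mut last, 2).is_err());
    }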
Closes #272
This patch includes two new significant debug assertions:
* Assert stream counts are zero when the connection finalizes.
* Assert all stream state has been released when the connection is
dropped.
These two assertions were added in an effort to test the fix provided
by #261. In doing so, many related bugs have been discovered and fixed.
The details related to these bugs can be found in #273.
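
Roughly, with stand-in fields rather than h2's actual internals, the two
assertions amount to:

    // Stand-in fields: `active_streams` models the stream counts, and
    // `store_len` models how many stream-state entries are still alive.
    struct Connection {
        active_streams: usize,
        store_len: usize,
    }

    impl Connection {
        fn finalize(&mut self) {
            // Assertion 1: no streams may still be counted when the
            // connection finalizes.
            debug_assert_eq!(self.active_streams, 0, "streams still counted");
        }
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Assertion 2: all stream state must have been released by
            // the time the connection is dropped.
            debug_assert_eq!(self.store_len, 0, "stream state leaked");
        }
    }

    fn main() {
        let mut conn = Connection { active_streams: 0, store_len: 0 };
        conn.finalize(); // passes: nothing leaked
    }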
In `clear_queue` we drop all the queued frames for a stream, but this
doesn't take into account a frame that is still buffered inside the
`FramedWrite`. This can lead to a panic when `reclaim_frame` tries to
recover a frame onto a stream that has already been destroyed, or, more
generally, cause incorrect behaviour.
Instead, keep track of which frame is currently in flight; then, when
`clear_queue` hits a stream with an in-flight data frame, mark the frame
to be dropped instead of reclaimed.
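
A toy model of that bookkeeping (stand-in types; h2's real writer differs
in detail):

    #[derive(Clone, Copy)]
    enum InFlight {
        None,
        // A data frame for this stream is buffered in the writer.
        Frame { stream_id: u32 },
        // The frame's stream was cleared; the frame must not be reclaimed.
        Drop,
    }

    struct Writer {
        in_flight: InFlight,
    }

    impl Writer {
        fn clear_queue(&mut self, stream_id: u32) {
            if let InFlight::Frame { stream_id: id } = self.in_flight {
                if id == stream_id {
                    self.in_flight = InFlight::Drop;
                }
            }
        }

        // Returns the stream to re-queue the buffered frame onto, if any.
        fn reclaim_frame(&mut self) -> Option<u32> {
            match std::mem::replace(&mut self.in_flight, InFlight::None) {
                InFlight::Frame { stream_id } => Some(stream_id),
                _ => None, // nothing buffered, or marked for drop
            }
        }
    }

    fn main() {
        let mut writer = Writer { in_flight: InFlight::Frame { stream_id: 1 } };
        writer.clear_queue(1);
        // The frame is dropped rather than reclaimed onto a dead stream.
        assert!(writer.reclaim_frame().is_none());
    }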
- Adds `wait_for` that takes another future to signal the mock
should continue.
- Adds `yield_once` to allow one chain of futures to yield to the
other (a minimal sketch follows below).
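
h2 of this era is written against futures 0.1, but the idea behind
`yield_once` is easiest to show in today's `std::future` terms: return
`Pending` exactly once while immediately waking yourself, so the executor
runs everything else before polling you again. The implementation below
is a sketch, not the actual h2-support code:

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    struct YieldOnce {
        yielded: bool,
    }

    impl Future for YieldOnce {
        type Output = ();

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            if self.yielded {
                Poll::Ready(())
            } else {
                self.yielded = true;
                // Wake immediately so we are polled again after other
                // tasks have had a chance to make progress.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }

    fn yield_once() -> impl Future<Output = ()> {
        YieldOnce { yielded: false }
    }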
If graceful shutdown is initiated, a GOAWAY with a last stream ID of the
max stream ID - 1 is sent, followed by a PING frame to measure RTT. When
the PING is ACKed, the connection sends a new GOAWAY with the proper last
processed stream ID. From there, once all active streams have completed,
the connection will finally close.
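
The sequence boils down to three phases; a toy model with stand-in types
(not h2's actual state machine):

    enum GracefulShutdown {
        // Phase 1: first GOAWAY (last stream ID near the maximum) sent,
        // plus a PING; waiting on the PING ack to bound the round trip.
        Draining,
        // Phase 2: PING ACKed, second GOAWAY sent with the true last
        // processed stream ID; waiting for active streams to complete.
        Closing,
        // Phase 3: all active streams done; the connection closes.
        Closed,
    }

    // Only a PING ack advances Draining to Closing; stream completion
    // later advances Closing to Closed.
    fn on_ping_ack(state: GracefulShutdown) -> GracefulShutdown {
        match state {
            GracefulShutdown::Draining => GracefulShutdown::Closing,
            other => other,
        }
    }

    fn main() {
        let state = on_ping_ack(GracefulShutdown::Draining);
        assert!(matches!(state, GracefulShutdown::Closing));
    }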
Because streams that were reset by the peer were not clearing their
pending send frames / `buffered_send_data`, they were not being counted
towards the concurrency limit.