Previously, any streams that were dropped or closed without having
consumed their in-flight received window capacity would simply leak that
capacity for the connection. This could easily happen if a `RecvStream`
was dropped before fully consuming its data, in which case the user had
no way of knowing how much capacity to release in the first place. This
resulted in stalled connections that would never regain capacity.
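As a rough illustration of the scenario, consider a consumer that reads only part of a body before dropping it. The sketch below uses the `flow_control()` / `release_capacity()` names from recent `h2` releases (earlier releases exposed a separate `ReleaseCapacity` handle), so treat it as a sketch rather than the exact API at the time of this change.

```rust
use h2::RecvStream;

// Reads a single data chunk and then drops the stream.
async fn peek_then_drop(mut body: RecvStream) -> Result<(), h2::Error> {
    if let Some(chunk) = body.data().await {
        let chunk = chunk?;
        // A well-behaved consumer hands consumed capacity back to the
        // connection so the peer is allowed to send more data.
        body.flow_control().release_capacity(chunk.len())?;
    }
    // `body` is dropped here with any remaining data still charged against
    // the connection-level window; before this fix that capacity leaked.
    Ok(())
}
```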
I believe this was an oversight - a stream that is reset can still have some
capacity assigned to it (e.g. if said capacity was assigned in the same poll as
the reset), which should be redistributed.
Because `send_reset` called `recv_err`, which calls `reclaim_all_capacity`,
which eventually calls `transition(stream, ..)` -- all of which happens _before_
the RESET frame is enqueued -- it was possible for the stream to get unlinked
from the store (if there was any connection-level capacity to reassign). This
could then cause the stream to get "leaked" on drop/EOF since it would no longer
be iterated.
Fix this by delaying the call to `reclaim_all_capacity` until _after_ the
RESET frame is enqueued.
A test demonstrating the issue is included.
* Prevent `pending_open` streams from being released.
This fixes a panic that would otherwise occur in some cases. A test
demonstrating said panic is included.
* Clear the pending_open queue together with everything else.
There was a race condition in the test where the server connection
sometimes closed before the final client operation. This triggered an
unwrap in the test.
This patch updates the test to ensure that the mock server connection
stays open until the client test is complete.
Because `self.pending` doesn't necessarily get cleaned up in a timely fashion -
rather, only when the user calls `poll_ready()` - it was possible for it to
refer to a stream that has already been closed. This would lead to a panic the
next time that `poll_ready()` was called.
Instead, use an `OpaqueStreamRef`, bumping the refcount.
A change to an existing test is included which demonstrates the issue.
Previously, monotonic stream IDs (spec 5.1.1) for push promises were not
enforced. This was due to push promises going through an entirely
separate code path from normally initiated streams.
This patch unifies the code path for initializing streams via both
HEADERS and PUSH_PROMISE. This is done by first calling `recv.open` in
both cases.
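For reference, the rule being enforced boils down to the check sketched below; this is an illustrative rendering of RFC 7540 §5.1.1, not h2's internal code.

```rust
// Stream IDs initiated by a peer must be strictly increasing; seeing an ID
// at or below the highest one already observed is a connection error.
fn check_monotonic(highest_seen: &mut u32, new_id: u32) -> Result<(), &'static str> {
    if new_id <= *highest_seen {
        return Err("PROTOCOL_ERROR: stream ID is not monotonically increasing");
    }
    *highest_seen = new_id;
    Ok(())
}
```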
Closes #272
This patch includes two new significant debug assertions:
* Assert stream counts are zero when the connection finalizes.
* Assert all stream state has been released when the connection is
dropped.
These two assertions were added in an effort to test the fix provided
by #261. In doing so, many related bugs have been discovered and fixed.
The details related to these bugs can be found in #273.
In `clear_queue` we drop all the queued frames for a stream, but this doesn't
take into account a buffered frame inside of the `FramedWrite`. This can lead
to a panic when `reclaim_frame` tries to recover a frame onto a stream that has
already been destroyed, or in general cause wrong behaviour.
Instead, let's keep track of what frame is currently in-flight; then, when we
`clear_queue` a stream with an in-flight data frame, mark the frame to be
dropped instead of reclaimed.
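The idea can be sketched as follows; the names are illustrative only and do not match h2's internal types.

```rust
// Track the single frame that may be sitting in the write buffer. If the
// owning stream's queue is cleared while the frame is in flight, mark it so
// that a later reclaim drops it instead of re-attaching it to a stream that
// no longer exists.
struct InFlight {
    stream_id: u32,
    drop_on_reclaim: bool,
}

impl InFlight {
    // Called by clear_queue()-like logic when the stream is destroyed.
    fn mark_dropped(&mut self) {
        self.drop_on_reclaim = true;
    }

    // Called when the writer gives the frame back.
    fn reclaim(self) -> Option<u32> {
        if self.drop_on_reclaim {
            None // discard: the stream is gone
        } else {
            Some(self.stream_id) // safe to push back onto the stream's queue
        }
    }
}
```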
- Adds `wait_for` that takes another future to signal the mock
should continue.
- Adds `yield_once` to allow one chain of futures to yield to the
  other (a minimal sketch of the idea follows).
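To make the "yield once" idea concrete, here is a minimal sketch in today's std-future terms; the helpers actually added to the mock live in h2's test support and their signatures may differ.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A future that returns Pending exactly once, letting another chain of
// futures run before this one resumes. Usage: `YieldOnce(false).await`.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref(); // make sure we get polled again
            Poll::Pending
        }
    }
}
```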
If graceful shutdown is initiated, a GOAWAY with the maximum stream ID
(2^31 - 1) is sent, followed by a PING frame to measure RTT. When the
PING is ACKed, the connection sends a new GOAWAY with the proper
last-processed stream ID. From there, once all active streams have
completed, the connection will finally close.
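From the user's side this flow is driven by `graceful_shutdown()`. The sketch below is written against the shape of recent `h2` server APIs (`graceful_shutdown`, `poll_closed`) and async/await, so treat it as an approximation rather than the exact API at the time of this change.

```rust
use std::future::poll_fn;

async fn shut_down<T>(mut conn: h2::server::Connection<T, bytes::Bytes>) -> Result<(), h2::Error>
where
    T: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin,
{
    // Advertises GOAWAY with the maximum stream ID and kicks off the
    // PING-based second GOAWAY described above.
    conn.graceful_shutdown();
    // Keep driving the connection until all active streams complete and
    // the final GOAWAY has been flushed.
    poll_fn(|cx| conn.poll_closed(cx)).await
}
```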
Because streams that were reset by the peer were not clearing their
pending send frames / `buffered_send_data`, they were not being counted
towards the concurrency limit.
The HTTP/2.0 specification requires that the `:path` pseudo-header is never
empty for requests, unless the request uses the OPTIONS method.
This was not being correctly enforced.
This patch provides a test and a fix.
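The rule itself boils down to a check like the one below; this is an illustrative sketch of the requirement, not h2's actual validation code.

```rust
// A request's :path pseudo-header must not be empty; the commit above calls
// out OPTIONS as the exception (asterisk-form OPTIONS requests carry "*").
fn validate_path(method: &http::Method, path: &str) -> Result<(), &'static str> {
    if path.is_empty() && *method != http::Method::OPTIONS {
        return Err("malformed request: empty :path pseudo-header");
    }
    Ok(())
}
```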
This assert does not hold as many streams can be pushed into
pending_capacity during a call to send_data(). See issue #224
for more discussion and sign-off.
Closes #224
Upgrade to env_logger 0.5 and log 0.4 so that projects that use those
versions don't have to build both those versions and the older ones
that h2 is currently using.
Don't enable the regex support in env_logger. Applications that want
the regex support can enable it themselves; this will happen
automatically when they add their env_logger dependency.
Disable the env_logger dependency in quickcheck.
The result of this is that there are fewer dependencies. For example,
regex and its dependencies are no longer required at all, as can be
seen by observing the changes to the Cargo.lock. That said,
env_logger 0.5 does add more dependencies itself; however it seems
applications are going to use env_logger 0.5 anyway so this is still
a net gain.
Submitted on behalf of Buoyant, Inc.
Signed-off-by: Brian Smith <brian@briansmith.org>
Previously, when all Streams were dropped or finished, the Connection was
held open until the peer hung up. Instead, the Connection should hang up
once it knows that nothing more will be sent.
To fix this, we notify the Connection when a stream is no longer
referenced. On the Connection's poll(), we check that there are no
active, held, or reset streams and no remaining references to the
Streams, and transition to sending a GOAWAY if that is the case.
The specific behavior depends on whether we are running as a client or a server.
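In user-facing terms the new behavior looks roughly like the sketch below, written against the shape of recent `h2` client APIs (async handshake, the connection driven as a future); the version this change landed in used futures 0.1, so the exact types differed.

```rust
async fn run_client<T>(io: T) -> Result<(), h2::Error>
where
    T: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin + Send + 'static,
{
    let (send_request, connection) = h2::client::handshake(io).await?;
    // Drive the connection on its own task.
    let conn_task = tokio::spawn(connection);
    // Drop the only request handle without ever sending a request.
    drop(send_request);
    // With no streams and no remaining handles, the connection sends a GOAWAY
    // and resolves on its own instead of waiting for the peer to hang up.
    conn_task.await.expect("connection task panicked")
}
```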
This, uh, grew into something far bigger than expected, but it turns out, all of it was needed to eventually support this correctly.
- Adds configuration to client and server to set [SETTINGS_MAX_HEADER_LIST_SIZE](http://httpwg.org/specs/rfc7540.html#SETTINGS_MAX_HEADER_LIST_SIZE)
- If not set, a "sane default" of 16 MB is used (taken from golang's http2)
- Decoding header blocks now happens as they are received, instead of buffering up possibly forever until the last continuation frame is parsed.
- As each field is decoded, it's undecoded size is added to the total. Whenever a header block goes over the maximum size, the `frame` will be marked as such.
- Whenever a header block is deemed over max limit, decoding will still continue, but new fields will not be appended to `HeaderMap`. This is also can save wasted hashing.
- To protect against enormous string literals, such that they span multiple continuation frames, a check is made that the combined encoded bytes is less than the max allowed size. While technically not exactly what the spec suggests (counting decoded size instead), this should hopefully only happen when someone is indeed malicious. If found, a `GOAWAY` of `COMPRESSION_ERROR` is sent, and the connection shut down.
- After an oversize header block frame is finished decoding, the streams state machine will notice it is oversize, and handle that.
- If the local peer is a server, a 431 response is sent, as suggested by the spec.
- A `REFUSED_STREAM` reset is sent, since we cannot actually give the stream to the user.
- In order to be able to send both the 431 HEADERS frame and a reset frame afterwards, the scheduled `Canceled` machinery was generalized into a `Scheduled(Reason)` state.
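A hedged sketch of the configuration knob, using the `max_header_list_size` builder name from recent `h2` releases:

```rust
async fn accept_with_limit<T>(io: T) -> Result<h2::server::Connection<T, bytes::Bytes>, h2::Error>
where
    T: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin,
{
    h2::server::Builder::new()
        // Cap the decoded size of a received header block; oversize blocks
        // get the 431 + REFUSED_STREAM handling described above.
        .max_header_list_size(16 * 1024)
        .handshake(io)
        .await
}
```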
Closes #18. Closes #191.
This patch renames a number of types and functions making
the API more consistent.
* `Server` -> `Connection`
* `Client` -> `SendRequest`
* `Respond` -> `SendResponse`.
It also moves the handshake fns off of `Connection` and makes them
free fns in the module, and `Connection::builder` is removed
in favor of `Builder::new` (a usage sketch follows).
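Put together, the renamed surface looks roughly like this, shown with the async API of recent `h2` releases; the futures 0.1 version at the time was spelled differently.

```rust
use h2::server;

async fn serve<T>(io: T) -> Result<(), h2::Error>
where
    T: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin,
{
    // The free-function handshake replaces the associated handshake fns
    // formerly on `Connection`, as well as `Connection::builder()`.
    let mut connection = server::handshake(io).await?;
    // `accept` yields a request paired with a `SendResponse` handle
    // (formerly `Respond`).
    while let Some(result) = connection.accept().await {
        let (_request, mut respond) = result?;
        respond.send_response(http::Response::new(()), true)?;
    }
    Ok(())
}
```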