There are many changes involved with this, but let's just talk about
user-facing changes.
- Creating a `Client` or `Server` now requires a Tokio `Core` event loop
to attach to.
- `Request` and `Response` no longer implement the
`std::io::{Read, Write}` traits; instead, they represent their bodies as
a `futures::Stream` of items, where each item is a `Chunk`.
- The `Client.request` method now takes a `Request`, instead of being
used as a builder, and returns a `Future` that resolves to a `Response`.
- The `Handler` trait for servers is gone; the Tokio `Service` trait is
used instead, which allows interoperability with generic middleware
(see the sketches below).
BREAKING CHANGE: A big sweeping set of breaking changes.
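To make the new shape concrete, here is a minimal sketch of the client
flow. It assumes the tokio-core era API (`Core`, `Handle`, and the Tokio
`Service` trait); exact names may differ slightly from the final release.

```rust
// Client: attach to a Tokio event loop, send a `Request`, and consume
// the body as a `futures::Stream` of `Chunk`s.
extern crate futures;
extern crate hyper;
extern crate tokio_core;

use futures::{Future, Stream};
use hyper::{Client, Method, Request};
use tokio_core::reactor::Core;

fn main() {
    // The Client is created against the event loop's handle.
    let mut core = Core::new().unwrap();
    let client = Client::new(&core.handle());

    // `request` takes a full `Request` (no builder) and returns a Future.
    let uri = "http://example.com/".parse().unwrap();
    let work = client.request(Request::new(Method::Get, uri)).and_then(|res| {
        // The body is a Stream of Chunks rather than an `io::Read`.
        res.body().concat2()
    });

    let body = core.run(work).unwrap();
    println!("{}", String::from_utf8_lossy(&body));
}
```

On the server side, a handler is now any type implementing the Tokio
`Service` trait:

```rust
extern crate futures;
extern crate hyper;

use futures::future::{self, FutureResult};
use hyper::server::{Http, Request, Response, Service};

struct Hello;

impl Service for Hello {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = FutureResult<Response, hyper::Error>;

    fn call(&self, _req: Request) -> Self::Future {
        // Respond immediately; any Future resolving to a Response works.
        future::ok(Response::new().with_body("Hello!"))
    }
}

fn main() {
    let addr = "127.0.0.1:3000".parse().unwrap();
    Http::new().bind(&addr, || Ok(Hello)).unwrap().run().unwrap();
}
```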
We encountered some issues where `Conn::ready()` would busy-loop on
reads. Previously, `ConnInner::can_read_more()` considered neither
whether the previous read returned a `WouldBlock` error nor whether the
transport was blocked. Accounting for this additional state fixes the
busy loop.
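A rough sketch of the extra bookkeeping, with illustrative field names
rather than hyper's actual internals:

```rust
struct ConnInner {
    /// Set when the last read returned `WouldBlock`.
    read_would_block: bool,
    /// Set when the underlying transport reports itself blocked.
    transport_blocked: bool,
}

impl ConnInner {
    fn can_read_more(&self) -> bool {
        // Without these two checks, `ready()` could keep polling reads
        // in a tight loop even though the socket has nothing to deliver.
        !self.read_would_block && !self.transport_blocked
    }
}
```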
It's possible that a connection will be closed and the only way to find
out is by doing a read. The keep-alive state (`State::Init` +
`Next_::Wait`) now tries a read when the socket becomes readable. On
EOF, it returns the closed state. If bytes are actually available,
that's a connection error, and the connection is closed. Otherwise, it
was just a spurious wakeup.
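In sketch form (the transport and the result enum here are illustrative,
not hyper's actual types):

```rust
use std::io::{self, Read};

enum Idle {
    Alive,            // spurious wakeup: keep waiting
    Closed,           // clean EOF: the peer hung up
    Error(io::Error), // unexpected bytes or a real I/O error
}

fn on_readable_while_idle<T: Read>(transport: &mut T) -> Idle {
    let mut buf = [0u8; 1];
    match transport.read(&mut buf) {
        Ok(0) => Idle::Closed,
        Ok(_) => Idle::Error(io::Error::new(
            io::ErrorKind::InvalidData,
            "unexpected bytes on idle keep-alive connection",
        )),
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Idle::Alive,
        Err(e) => Idle::Error(e),
    }
}
```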
Not handling this was an issue for keep-alive connections, because
requests would get assigned to a closed connection and then immediately
error. Handling the HUP event makes this much less likely, though it
remains possible: if a HUP arrives while the event loop is busy
processing new requests, the connection is already disconnected but the
HUP hasn't been processed yet, so a request could still be assigned to
it.
We've been seeing an unexpected number of timeouts in our benchmarking.
Handling spurious timeouts as in this patch seems to fix it!
Note that `timeout_start` must be managed carefully. If the current
time is captured in the wrong place, requests may never time out.
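A sketch of that handling; the `Timeout` type and its fields are
illustrative, not hyper's internals:

```rust
use std::time::{Duration, Instant};

struct Timeout {
    /// Captured once, when the request is first queued. Re-capturing it
    /// on every wakeup would push the deadline forward forever.
    start: Instant,
    duration: Duration,
}

enum Fired {
    Elapsed,         // a real timeout
    Rearm(Duration), // spurious wakeup: sleep the remaining time
}

impl Timeout {
    fn check(&self, now: Instant) -> Fired {
        let elapsed = now.duration_since(self.start);
        if elapsed >= self.duration {
            Fired::Elapsed
        } else {
            Fired::Rearm(self.duration - elapsed)
        }
    }
}
```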
I've had a couple of instances during stress testing where `Conn::ready`
would overflow its stack by recursing into itself. This moves subsequent
calls to `ready()` into a loop outside the function.
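Roughly, the transformation looks like this (stand-in code, not the
actual `Conn`):

```rust
struct Conn {
    pending: u32,
}

impl Conn {
    /// Performs one round of work and reports whether another round is
    /// needed, instead of recursively calling itself.
    fn ready(&mut self) -> bool {
        self.pending = self.pending.saturating_sub(1);
        self.pending > 0
    }
}

fn drive(conn: &mut Conn) {
    // The repetition lives in a flat loop at the call site, so the
    // stack depth stays constant no matter how many rounds are needed.
    while conn.ready() {}
}
```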
We observed an issue where connections never entered the keep-alive
state, due to a bug in `State::update`. The issue is resolved by
resetting the write state to `KeepAlive` when it arrives as `KeepAlive`;
otherwise, it would incorrectly be marked as `Closed`.
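In spirit, the fix looks like this (the enum and function are
illustrative, not hyper's actual `State`):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Write {
    KeepAlive,
    Continue,
    Closed,
}

/// Decides the write state once a message finishes.
fn next_write_state(current: Write) -> Write {
    match current {
        // The fix: a `KeepAlive` write state stays `KeepAlive`, so the
        // connection can be parked and reused. The buggy version let
        // this case fall into the catch-all below and reported `Closed`.
        Write::KeepAlive => Write::KeepAlive,
        _ => Write::Closed,
    }
}
```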
The `trace!` lines in here are useful for debugging keep-alive issues,
so I've left them in.
The move semantics of `http::Conn` remain the same, but consuming `self`
now requires only a pointer copy rather than copying a larger amount of
data. This greatly improves hyper's performance: by my measurements,
about 125% faster when benchmarking with wrk.
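As an illustration of why this helps (generic Rust, not hyper's code):
moving a large value by value copies all of its bytes, while moving a
`Box` of it copies only a pointer.

```rust
/// A stand-in for a big state machine like `http::Conn`.
enum State {
    Reading([u8; 4096]),
    Writing([u8; 4096]),
    Closed,
}

fn consume(state: State) -> State {
    // Moving `state` in and out copies up to ~4 KiB each way.
    state
}

fn consume_boxed(state: Box<State>) -> Box<State> {
    // Moving the `Box` copies a single pointer-sized value; the 4 KiB
    // payload never moves.
    state
}
```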