This should fix some busy looping when using OpenSSL. For example, if
the transport was blocked on a read, that state wasn't surfaced to the
`http::Conn`, so the wrong interest was registered with the event
loop. Registering for the wrong interest triggered calls to
`http::Conn::ready()` that were unable to make progress.
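As a rough sketch of the idea (the `Blocked` enum, `Interest` enum, and `interest()` helper below are illustrative assumptions, not hyper's actual internals), the interest registered with the event loop has to take the transport's blocked direction into account:

```rust
// Illustrative stand-ins only, not hyper's actual types.
enum Blocked {
    Read,
    Write,
}

#[derive(Clone, Copy)]
enum Interest {
    Read,
    Write,
    ReadWrite,
}

// With TLS, the transport may be blocked waiting for readability even while
// the protocol wants to write (and vice versa). If the blocked direction is
// not surfaced, the event loop is registered with the wrong interest and
// ready() gets woken with events it cannot act on.
fn interest(wants: Interest, transport_blocked: Option<Blocked>) -> Interest {
    match transport_blocked {
        Some(Blocked::Read) => Interest::Read,
        Some(Blocked::Write) => Interest::Write,
        None => wants,
    }
}
```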
We encountered some issues where `Conn::ready()` would busy loop on
reads. Previously, `ConnInner::can_read_more()` did not consider
whether the previous read returned a `WouldBlock` error, nor whether
the transport itself was blocked. Accounting for this additional state
fixes the busy loop problem.
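A minimal sketch of the extra state such a check needs to consult; the field names below are illustrative, not the real `ConnInner`:

```rust
// Illustrative stand-in for the relevant bits of ConnInner.
struct ConnInner {
    // Whether the protocol state machine could consume more input.
    wants_read: bool,
    // Whether the last read returned io::ErrorKind::WouldBlock.
    read_would_block: bool,
    // Whether the transport itself reported being blocked.
    transport_blocked: bool,
}

impl ConnInner {
    fn can_read_more(&self) -> bool {
        // Without the last two checks, the readiness loop keeps retrying reads
        // that can only return WouldBlock again, i.e. it busy loops.
        self.wants_read && !self.read_would_block && !self.transport_blocked
    }
}
```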
The previous keep-alive strategy was to cycle connections in a
round-robin style. However, that always keeps more connections
around than are needed. The new strategy allows extra connections
to expire when only a few are needed. This is accomplished by preferring
to reuse a connection that was just released to the pool over one that
has been there for a long time.
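A sketch of that most-recently-used strategy, assuming a plain `VecDeque` of idle connections rather than the pool's actual types:

```rust
use std::collections::VecDeque;

struct Pool<T> {
    idle: VecDeque<T>,
}

impl<T> Pool<T> {
    fn release(&mut self, conn: T) {
        // A connection that just finished goes on the back of the queue...
        self.idle.push_back(conn);
    }

    fn checkout(&mut self) -> Option<T> {
        // ...and the back is also where we take from, so the most recently
        // released connection is reused first. Connections near the front sit
        // idle, reach their keep-alive expiry, and are allowed to close.
        self.idle.pop_back()
    }
}
```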
It's possible that a connection has been closed and the only way to find
out is by doing a read. The keep-alive state (`State::Init` + `Next_::Wait`)
now tries to read on readable. On EOF, it transitions to the closed
state. If bytes are actually available, that's a connection error and
the connection is closed. Otherwise, it was just a spurious wakeup.
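Roughly, the check while idling looks like the sketch below, assuming a non-blocking `Read` transport and a throwaway buffer (the `Idle` enum is illustrative):

```rust
use std::io::{self, Read};

enum Idle {
    StillOpen,        // spurious wakeup, keep waiting
    Closed,           // clean EOF from the server
    Error(io::Error), // unexpected bytes or a real I/O error
}

fn check_idle<T: Read>(transport: &mut T) -> Idle {
    let mut buf = [0u8; 1];
    match transport.read(&mut buf) {
        // EOF: the server closed the keep-alive connection.
        Ok(0) => Idle::Closed,
        // Bytes arriving with no outstanding request is a protocol error.
        Ok(_) => Idle::Error(io::Error::new(
            io::ErrorKind::InvalidData,
            "unexpected bytes on idle connection",
        )),
        // Nothing to read after all: just a spurious wakeup.
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Idle::StillOpen,
        Err(e) => Idle::Error(e),
    }
}
```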
Not handling closed idle connections was an issue for keep-alive because
requests would get assigned to a closed connection and then immediately
error. Handling the HUP event makes this situation much less likely. It
is still possible, however: consider a HUP arriving while the event loop
is busy processing new requests to add. The connection is disconnected,
but the HUP hasn't been processed yet, so a request could still be
assigned to it. That case is unlikely, though.
We've been seeing a strange number of timeouts in our benchmarking.
Handling spurious timeouts as in this patch seems to fix it!
Note that managing the `timeout_start` needs to be done carefully. If
the current time is captured at the wrong point, it's possible requests
would never time out.
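A sketch of spurious-timeout handling; `timeout_start`, the `TimeoutAction` enum, and the re-arm step are assumptions rather than the patch's exact code:

```rust
use std::time::{Duration, Instant};

enum TimeoutAction {
    // The request really has exceeded its deadline.
    Expired,
    // Spurious wakeup: re-arm the timer for the remaining time.
    Rearm(Duration),
}

fn on_timeout(timeout_start: Instant, timeout: Duration) -> TimeoutAction {
    // `timeout_start` must be captured when the request is first registered;
    // if it were refreshed on every wakeup, the elapsed time would never
    // reach `timeout` and the request would never time out.
    let elapsed = timeout_start.elapsed();
    if elapsed >= timeout {
        TimeoutAction::Expired
    } else {
        TimeoutAction::Rearm(timeout - elapsed)
    }
}
```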
I've had a couple of instances during stress testing where
`Conn::ready()` would overflow its stack due to recursing on itself. This
moves subsequent calls to `ready()` into a loop outside the function.
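The shape of the change, with `Next::Again` and the surrounding types as illustrative stand-ins for the real internals:

```rust
enum Next {
    Again, // ready() has more work and wants to run again immediately
    Wait,  // nothing more to do until the next event
}

struct Conn;

impl Conn {
    fn ready(&mut self) -> Next {
        // ...one round of reading/writing; no longer calls itself...
        Next::Wait
    }
}

fn drive(conn: &mut Conn) {
    // Previously ready() recursed when more work was available, which could
    // overflow the stack under load. Looping at the call site keeps the
    // stack flat.
    loop {
        match conn.ready() {
            Next::Again => continue,
            Next::Wait => break,
        }
    }
}
```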
When suddenly loading up a client with thousands of connections, the
default DNS worker count of four cannot keep up and many requests
time out as a result. Most people don't need a large pool, so making this
configurable is a natural choice.
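For illustration, a minimal DNS worker pool with a configurable thread count; the `Dns` type and its channel-based API below are assumptions, not hyper's actual resolver:

```rust
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Answer = std::io::Result<Vec<SocketAddr>>;
type Job = (String, u16, mpsc::Sender<Answer>);

struct Dns {
    tx: mpsc::Sender<Job>,
}

impl Dns {
    // `workers` is the knob this change exposes; four was the old fixed value.
    fn new(workers: usize) -> Dns {
        let (tx, rx) = mpsc::channel::<Job>();
        let rx = Arc::new(Mutex::new(rx));
        for _ in 0..workers {
            let rx = rx.clone();
            thread::spawn(move || loop {
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok((host, port, reply)) => {
                        // Blocking getaddrinfo lookup; more workers means more
                        // lookups in flight when thousands of requests start
                        // at once.
                        let answer = (host.as_str(), port)
                            .to_socket_addrs()
                            .map(|addrs| addrs.collect());
                        let _ = reply.send(answer);
                    }
                    Err(_) => return, // pool dropped, shut down
                }
            });
        }
        Dns { tx: tx }
    }

    fn resolve(&self, host: &str, port: u16) -> mpsc::Receiver<Answer> {
        let (reply_tx, reply_rx) = mpsc::channel();
        let _ = self.tx.send((host.to_owned(), port, reply_tx));
        reply_rx
    }
}
```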
We observed an issue where connections were never entering the
keep-alive state due to a bug in `State::update`. The issue is
resolved by resetting the write state to `KeepAlive` when it arrives as
`KeepAlive`. Otherwise, it would incorrectly be marked as `Closed`.
The `trace!` lines in here are useful for debugging keep-alive issues so
I've left them in.
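A rough sketch of the fix; the `Writing` enum and the shape of the match below are assumptions, not the actual `State::update` code:

```rust
#[derive(Debug, PartialEq)]
enum Writing {
    KeepAlive,
    Closed,
}

// A write state that arrives already as KeepAlive must stay KeepAlive rather
// than falling through to the catch-all arm that marks it Closed.
fn update_writing(current: Writing, wants_keep_alive: bool) -> Writing {
    match current {
        Writing::KeepAlive => Writing::KeepAlive,
        _ if wants_keep_alive => Writing::KeepAlive,
        _ => Writing::Closed,
    }
}
```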
In the scenario where a request is started on the `Client`, the client
has a full slab, and sockets for a *different* domain are idling in
keep-alive, the new request would previously cause the client to panic.
This patch adds a `spawn_error` handler which attempts to evict an idle
connection to make space for the new request. If space cannot be made,
the error handler is run (passed `Error::Full`) and the `Handler` is
dropped.
This is a breaking change because of the new variant of `Error`.
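Roughly, the decision `spawn_error` makes looks like the sketch below; the names are illustrative, not the client's actual slab and pool types:

```rust
enum Spawn<H> {
    // There is room (possibly after evicting an idle socket): register the
    // handler as usual.
    Ok(H),
    // Could not make room: the caller invokes the handler's error callback
    // with Error::Full and then drops the handler.
    Full(H),
}

fn try_spawn<H, F>(slab_is_full: bool, mut evict_idle: F, handler: H) -> Spawn<H>
where
    F: FnMut() -> bool, // closes one idle keep-alive socket, returns success
{
    // Only evict when the slab is actually full; the idle socket may belong
    // to a different domain than the new request.
    if !slab_is_full || evict_idle() {
        Spawn::Ok(handler)
    } else {
        Spawn::Full(handler)
    }
}
```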
Some inefficient use of `Vec` in the client was replaced with `VecDeque`
to support push/pop from either end.
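For reference, a tiny illustration of why: `VecDeque` gives O(1) push/pop at both ends, whereas removing from the front of a `Vec` shifts every remaining element.

```rust
use std::collections::VecDeque;

fn main() {
    let mut idle: VecDeque<u32> = VecDeque::new();
    idle.push_back(1);
    idle.push_back(2);
    idle.push_front(0);

    assert_eq!(idle.pop_front(), Some(0)); // O(1), unlike Vec::remove(0)
    assert_eq!(idle.pop_back(), Some(2));  // O(1), same as Vec::pop
}
```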
Closes #896
Closes #897
BREAKING CHANGE: `RequestUri::AbsolutePath` variant is changed to a struct variant. Consider using `req.path()` or `req.query()` to get the relevant slice.
Closes #891
BREAKING CHANGE: `Headers.remove()` used to return a `bool`,
it now returns `Option<H>`. To determine if a header exists,
switch to `Headers.has()`.
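A hedged migration sketch, using `ContentLength` as the example header and assuming the post-change API:

```rust
use hyper::header::{ContentLength, Headers};

fn migrate(headers: &mut Headers) {
    // Before this change, remove() returned a bool. It now returns Option<H>,
    // yielding the removed, typed header value.
    if let Some(ContentLength(len)) = headers.remove::<ContentLength>() {
        println!("removed Content-Length: {}", len);
    }

    // To merely check for presence, use has() instead of remove().
    let _present: bool = headers.has::<ContentLength>();
}
```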