This improves performance. For now, a `Cow` is used internally, so
clients can set the host to a static value and avoid copies.
Later, it could be changed to also hold a `MemSlice`.
BREAKING CHANGE: The fields of the `Host` header are no longer
available. Use the getter methods instead.
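A sketch of the getter-based usage (the exact method signatures are
assumptions based on this commit message, not guaranteed by it):

    use hyper::header::Host;

    // "example.com" is &'static str, so the internal Cow borrows it
    // and no copy is made.
    let host = Host::new("example.com", None);
    assert_eq!(host.hostname(), "example.com");
    assert_eq!(host.port(), None);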
This removes the cookie crate, since it has an optional dependency on
openssl, which can cause massive breakage if toggled on. Instead, the
`Cookie` and `SetCookie` headers now just use a `String`. Anyone can
create any typed header, so it is easy to plug in different
implementations.
BREAKING CHANGE: The `Cookie` and `SetCookie` headers no longer use the
cookie crate. Replacement typed headers can be written for any cookie
library, or the ones provided in hyper can be accessed as strings.
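A sketch of the string-based headers (assuming the tuple-struct
representation this commit introduces; the exact shape is an
assumption):

    use hyper::header::{Cookie, Headers, SetCookie};

    let mut headers = Headers::new();
    headers.set(Cookie(vec![String::from("session=abc123")]));
    headers.set(SetCookie(vec![String::from("session=abc123; HttpOnly")]));

    // The raw strings are available for any cookie library to parse.
    if let Some(&Cookie(ref raw)) = headers.get::<Cookie>() {
        for cookie in raw {
            println!("cookie: {}", cookie);
        }
    }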
There are many changes involved with this, but let's just talk about
the user-facing changes:
- Creating a `Client` and `Server` now needs a Tokio `Core` event loop
to attach to.
- `Request` and `Response` no longer implement the
`std::io::{Read,Write}` traits; instead, they represent their bodies as
a `futures::Stream` of items, where each item is a `Chunk`.
- The `Client.request` method now takes a `Request`, instead of being
used as a builder, and returns a `Future` that resolves to a `Response`
(see the sketch after this list).
- The `Handler` trait for servers is no more, and instead the Tokio
`Service` trait is used. This allows interoperability with generic
middleware.
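Taken together, a minimal sketch of the new client flow (assuming
hyper 0.11-era APIs; the method names come from this commit, the rest
is illustrative):

    extern crate futures;
    extern crate hyper;
    extern crate tokio_core;

    use std::io::{self, Write};

    use futures::{Future, Stream};
    use hyper::{Client, Method, Request};
    use tokio_core::reactor::Core;

    fn main() {
        // The Client now attaches to a Tokio Core event loop.
        let mut core = Core::new().unwrap();
        let client = Client::new(&core.handle());

        // client.request() takes a Request and returns a Future
        // that resolves to the Response.
        let uri = "http://example.com".parse().unwrap();
        let req = Request::new(Method::Get, uri);
        let work = client.request(req).and_then(|res| {
            // The body is a futures::Stream of Chunks, not an io::Read.
            res.body().for_each(|chunk| {
                io::stdout().write_all(&chunk).map_err(From::from)
            })
        });

        core.run(work).unwrap();
    }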
BREAKING CHANGE: A big sweeping set of breaking changes.
Support for strict-origin and strict-origin-when-cross-origin in the
Referrer-Policy header is required for improving network security. This
commit adds these missing pieces of referrer policy.
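A sketch of setting the new values, assuming a typed `ReferrerPolicy`
header with variants named after the spec tokens:

    use hyper::header::{Headers, ReferrerPolicy};

    let mut headers = Headers::new();
    headers.set(ReferrerPolicy::StrictOrigin);
    // or: headers.set(ReferrerPolicy::StrictOriginWhenCrossOrigin);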
This should fix some busy looping when using OpenSSL. For example, if
the transport was blocked on a read, that fact wasn't surfaced to the
`http::Conn`, so the wrong interest was registered with the event loop.
Registering for the wrong interest triggered calls to
`http::Conn::ready()` that were unable to make progress.
We encountered some issues where `Conn::ready()` would busy-loop on
reads. Previously, `ConnInner::can_read_more()` did not consider
whether the previous read returned a WouldBlock error, nor whether the
transport was blocked. Accounting for this additional state fixes the
busy loop problem.
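A hypothetical sketch of the extra state being tracked (the names are
illustrative, not hyper's actual fields):

    use std::io::{self, Read};

    struct ConnInner<T> {
        transport: T,
        read_would_block: bool,
    }

    impl<T: Read> ConnInner<T> {
        fn try_read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            let result = self.transport.read(buf);
            // Remember whether the last read would have blocked.
            self.read_would_block = match result {
                Err(ref e) => e.kind() == io::ErrorKind::WouldBlock,
                Ok(_) => false,
            };
            result
        }

        fn can_read_more(&self) -> bool {
            // Previously this ignored the WouldBlock state, so the
            // event loop kept calling ready() even though a read could
            // not make progress, producing the busy loop.
            !self.read_would_block
        }
    }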
The previous keep-alive strategy was to cycle connections in a
round-robin style. However, that will always keep more connections
around than are needed. This new strategy will allow extra connections
to expire when only a few are needed. This is accomplished by
preferring to reuse a connection that was just released to the pool
over one that has been idle for a long time.
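A hypothetical sketch of the idea (not hyper's actual pool code): treat
the idle list as a stack, so the most recently released connection is
reused first and long-idle ones age out.

    use std::time::{Duration, Instant};

    struct Pool<T> {
        idle: Vec<(T, Instant)>, // most recently released at the back
    }

    impl<T> Pool<T> {
        fn put(&mut self, conn: T) {
            self.idle.push((conn, Instant::now()));
        }

        fn get(&mut self, max_idle: Duration) -> Option<T> {
            // Pop from the back: prefer the connection released last.
            while let Some((conn, since)) = self.idle.pop() {
                if since.elapsed() <= max_idle {
                    return Some(conn);
                }
                // Stale connections simply expire and are dropped.
                // A real pool would also purge stale entries that
                // linger at the front of the list.
            }
            None
        }
    }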
It's possible that a connection will be closed and the only way to find
out is by doing a read. The keep-alive state (State::Init + Next_::Wait)
now tries a read when the transport becomes readable. On EOF, it
returns a closed state. If bytes are actually available, that's a
connection error, and the connection is closed. Otherwise, it was just
a spurious wakeup.
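A hypothetical sketch of that idle check (the names are illustrative):

    use std::io::{self, Read};

    enum Idle {
        StillAlive, // spurious wakeup; keep the connection
        Closed,     // clean EOF; remove it from the pool
    }

    fn check_idle<T: Read>(transport: &mut T) -> io::Result<Idle> {
        let mut buf = [0u8; 8];
        match transport.read(&mut buf) {
            // EOF: the other side closed the keep-alive connection.
            Ok(0) => Ok(Idle::Closed),
            // Bytes on an idle connection are a protocol violation,
            // so treat it as a connection error and close.
            Ok(_) => Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "unexpected bytes on idle connection",
            )),
            // WouldBlock: the readable event was spurious.
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                Ok(Idle::StillAlive)
            }
            Err(e) => Err(e),
        }
    }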
Not handling HUP events was an issue for keep-alive connections because
requests would get assigned to a closed connection and then immediately
error. Handling the HUP event makes this situation much less likely. It
is still possible, however: consider the case where a HUP arrives while
the event loop is busy processing new requests to add. The connection
is disconnected, but the HUP hasn't been processed yet, and a request
could be assigned to it. That case is unlikely, though.