This allows us to improve performance. For now, a `Cow` is used
internally, so clients can set the host to a static value and no longer
need copies. Later, it could also be changed to hold a `MemSlice`.
BREAKING CHANGE: The fields of the `Host` header are no longer
available. Use the getter methods instead.
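As a rough sketch of the idea (not the actual definition in hyper), the
header now looks something like this, with getters in place of the
public fields:

    use std::borrow::Cow;

    // Illustrative only: the hostname is stored as a Cow, so a
    // `&'static str` can be set without allocating a copy.
    pub struct Host {
        hostname: Cow<'static, str>,
        port: Option<u16>,
    }

    impl Host {
        // Getter methods replace the previously public fields.
        pub fn hostname(&self) -> &str {
            &self.hostname
        }

        pub fn port(&self) -> Option<u16> {
            self.port
        }
    }
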
There are many changes involved with this, but let's just talk about
the user-facing changes (a short usage sketch follows the list).
- Creating a `Client` and `Server` now needs a Tokio `Core` event loop
to attach to.
- `Request` and `Response` both no longer implement the
`std::io::{Read,Write}` traits, but instead represent their bodies as a
`futures::Stream` of items, where each item is a `Chunk`.
- The `Client.request` method now takes a `Request`, instead of being
used as a builder, and returns a `Future` that resolves to `Response`.
- The `Handler` trait for servers is no more, and instead the Tokio
`Service` trait is used. This allows interoperability with generic
middleware.
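Roughly, a simple client request now looks like the sketch below
(written against the API described above; exact method names and
signatures may differ slightly):

    extern crate futures;
    extern crate hyper;
    extern crate tokio_core;

    use futures::{Future, Stream};

    fn main() {
        // The Client attaches to a Tokio `Core` event loop.
        let mut core = tokio_core::reactor::Core::new().unwrap();
        let client = hyper::Client::new(&core.handle());

        let uri = "http://example.com".parse().unwrap();
        let req = hyper::Request::new(hyper::Method::Get, uri);

        // `request` takes a Request and returns a Future of a Response.
        let work = client.request(req).and_then(|res| {
            // The body is a Stream of Chunks instead of an io::Read.
            res.body().concat2()
        });

        let body = core.run(work).unwrap();
        println!("read {} bytes", body.len());
    }
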
BREAKING CHANGE: A big sweeping set of breaking changes.
The previous keep-alive strategy was to cycle connections in a
round-robin style. However, that will always keep more connections
around than are needed. This new strategy will allow extra connections
to expire when only a few are needed. This is accomplished by preferring
to reuse a connection that was just released to the pool over one that
has been there for a long time.
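As an illustrative sketch of the strategy (not the actual pool code):
release to and reuse from the same end, so rarely-needed connections
drift to the other end and age out.

    use std::collections::VecDeque;
    use std::time::{Duration, Instant};

    struct Idle<T> {
        conn: T,
        since: Instant,
    }

    struct Pool<T> {
        idle: VecDeque<Idle<T>>,
        timeout: Duration,
    }

    impl<T> Pool<T> {
        // A released connection goes to the front...
        fn put(&mut self, conn: T) {
            self.idle.push_front(Idle { conn: conn, since: Instant::now() });
        }

        // ...and the front is reused first, so the back holds the
        // longest-idle connections, which can then expire.
        fn take(&mut self) -> Option<T> {
            while self.idle.back().map_or(false, |i| i.since.elapsed() >= self.timeout) {
                self.idle.pop_back();
            }
            self.idle.pop_front().map(|i| i.conn)
        }
    }
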
We've been seeing a strange number of timeouts in our benchmarking.
Handling spurious timeouts as in this patch seems to fix it!
Note that managing the `timeout_start` needs to be done carefully. If
the current time is provided in the wrong place, it's possible requests
would never time out.
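The sketch below uses hypothetical names, but the point is that the
start time is recorded once when the wait begins, never on a (possibly
spurious) wakeup, so the deadline can't keep sliding forward:

    use std::time::{Duration, Instant};

    struct Timeout {
        timeout_start: Option<Instant>,
        timeout: Duration,
    }

    impl Timeout {
        // Record the start only on the first check. If we instead reset
        // it on every wakeup, a request could never time out.
        fn expired(&mut self, now: Instant) -> bool {
            let start = *self.timeout_start.get_or_insert(now);
            now.duration_since(start) >= self.timeout
        }
    }
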
I've had a couple of instances during stress testing where
`Conn::ready` would overflow its stack due to recursing on itself. This
moves subsequent calls to `ready()` into a loop outside the function.
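Schematically (hypothetical names, not the real Conn internals), the
change is from self-recursion to driving the same step from a loop in
the caller:

    enum Ready {
        Done,
        Again, // previously this case recursed into ready() directly
    }

    fn ready_step() -> Ready {
        // ...do one unit of work...
        Ready::Done
    }

    fn drive() {
        // The loop lives outside the function, so stack depth stays
        // constant no matter how often the connection becomes ready again.
        loop {
            match ready_step() {
                Ready::Again => continue,
                Ready::Done => break,
            }
        }
    }
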
When loading up a client suddenly with thousands of connections, the
default DNS worker count of four cannot keep up and many requests
time out as a result. Most people don't need a large pool, so making this
configurable is a natural choice.
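A sketch of what that configuration might look like (the constructor
arguments shown here are an assumption, not something introduced by
this change):

    extern crate hyper;
    extern crate tokio_core;

    fn main() {
        let core = tokio_core::reactor::Core::new().unwrap();
        let handle = core.handle();

        // Raise the DNS worker pool well above the default of four for
        // clients that open thousands of connections at once.
        let connector = hyper::client::HttpConnector::new(64, &handle);
        let _client = hyper::Client::configure()
            .connector(connector)
            .build(&handle);
    }
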
In the scenario where a request is started on the `Client`, the client
has a full slab, and sockets for a *different* domain are idling in
keep-alive, the new request would previously cause the client to panic.
This patch adds a `spawn_error` handler which attempts to evict an idle
connection to make space for the new request. If space cannot be made,
the error handler is run (passed `Error::Full`) and the `Handler` is
dropped.
This is a breaking change because of the new variant of `Error`.
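In outline, with stand-in types rather than the actual client
internals, the recovery path is:

    pub enum Error {
        Full,
    }

    pub trait Handler {
        fn on_error(self, err: Error);
    }

    pub struct Client<T> {
        idle: Vec<T>, // keep-alive sockets, possibly for other domains
    }

    impl<T> Client<T> {
        fn retry<H: Handler>(&mut self, _handler: H) {
            // ...spawn the connection into the freed slot (omitted)...
        }

        // Called when spawning a new connection fails: the slab is full.
        pub fn spawn_error<H: Handler>(&mut self, handler: H) {
            if let Some(evicted) = self.idle.pop() {
                // Evict an idle connection to make space, then retry.
                drop(evicted);
                self.retry(handler);
            } else {
                // No space could be made: the error handler runs with
                // Error::Full, and the Handler is dropped afterwards.
                handler.on_error(Error::Full);
            }
        }
    }
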
Some inefficient use of `Vec` in the client was replaced with `VecDeque`
to support push/pop from either end.
Closes #896
Closes #897
BREAKING CHANGE: `RequestUri::AbsolutePath` variant is changed to a struct variant. Consider using `req.path()` or `req.query()` to get the relevant slice.
Methods added to `Client` and `Server` to control read and write
timeouts of the underlying socket.
Keep-Alive is re-enabled by default on the server, with a default
timeout of 5 seconds.
BREAKING CHANGE: This adds 2 required methods to the `NetworkStream`
trait, `set_read_timeout` and `set_write_timeout`. Any local
implementations will need to add them.
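For a custom stream wrapping a `TcpStream`, the new methods can usually
just delegate. The signatures below are assumed to mirror
`std::net::TcpStream`; check the trait definition for the exact form:

    use std::io;
    use std::net::TcpStream;
    use std::time::Duration;

    struct MyStream {
        inner: TcpStream,
    }

    // These would go in the `impl NetworkStream for MyStream` block,
    // alongside the methods that were already required.
    impl MyStream {
        fn set_read_timeout(&self, dur: Option<Duration>) -> io::Result<()> {
            self.inner.set_read_timeout(dur)
        }

        fn set_write_timeout(&self, dur: Option<Duration>) -> io::Result<()> {
            self.inner.set_write_timeout(dur)
        }
    }
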
Improve the compile time of downstream crates that use
`RequestBuilder` by not forcing them to remonomorphise and recompile
its non-generic methods when they use it: with this change, they can
just call the precompiled versions in the `hyper` object file(s). The
`send` method is the major culprit here, since it is quite large and
complicated.
For an extreme example,

    extern crate hyper;

    fn main() {
        hyper::Client::new().get("x").send().unwrap();
    }

takes ~4s to compile before this patch (i.e. generic RequestBuilder) and
~2s after. (The time spent interacting with LLVM goes from 2.2s to
0.3s.)
BREAKING CHANGE: `RequestBuilder<U>` should be replaced by `RequestBuilder`.
When an `Http11Message` knows that the previous response should not
have included a body per RFC 7230, and it fails to parse the following
response, the bytes are shuffled along, checking for the start of the
next response.
Closes #640
BREAKING CHANGE: This changes the signature of `HttpWriter.end()`,
returning an `EndError` that is similar to `std::io::IntoInnerError`,
allowing `HttpMessage` to retrieve the broken connections and not
panic. The breaking change isn't exposed in any usage of the `Client`
API, but since `end()` was technically a public method, the change is
breaking for anyone using `HttpWriter` directly.
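As a sketch of that shape (not the exact definition in hyper):

    use std::io::{self, Write};

    pub struct EndError<W>(io::Error, W);

    impl<W> EndError<W> {
        // The broken writer, and thus the connection, can be recovered
        // instead of panicking.
        pub fn into_inner(self) -> W {
            self.1
        }

        pub fn error(&self) -> &io::Error {
            &self.0
        }
    }

    pub fn end<W: Write>(mut w: W) -> Result<W, EndError<W>> {
        match w.flush() {
            Ok(()) => Ok(w),
            Err(e) => Err(EndError(e, w)),
        }
    }
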
While these methods are marked unstable in libstd, this is behind a
feature flag, `timeouts`. The `Client` and `Server` both have
`set_read_timeout` and `set_write_timeout` methods that will affect
all connections of that entity.
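Usage is roughly the sketch below (the `timeouts` feature must be
enabled, and the exact setter signatures are an assumption based on the
description above):

    extern crate hyper;

    use std::time::Duration;

    fn main() {
        let mut client = hyper::Client::new();
        // Applies to all connections made by this Client.
        client.set_read_timeout(Some(Duration::from_secs(30)));
        client.set_write_timeout(Some(Duration::from_secs(30)));
    }
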
BREAKING CHANGE: Any custom implementation of `NetworkStream` must now
implement `set_read_timeout` and `set_write_timeout`, so those will
break. Most users who only use the provided streams won't need any
changes.
Closes #315
This will always be the last URL that was used by the Request, which is
useful for determining what the final URL was after redirection.
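For example (a sketch with the blocking client; exposing the URL as a
`url` field is an assumption here, not something this change spells
out):

    extern crate hyper;

    fn main() {
        let client = hyper::Client::new();
        let res = client.get("http://example.com/old-page").send().unwrap();
        // After any redirects are followed, this is the last URL requested.
        println!("final url: {}", res.url);
    }
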
BREAKING CHANGE: Technically a break, since `Response::new()` takes an
additional argument. In practice, the only place that should have been
creating Responses directly is inside the Client, so it shouldn't
break anyone. If you were creating Responses manually, you'll need to
pass a Url argument.
BREAKING CHANGE: `Server::https` was changed to allow any
implementation of `Ssl`. `Server` in general was also changed.
`HttpConnector` no longer uses SSL; use `HttpsConnector` instead.
`Connector::connect` already used `&self`, and so would require
synchronization to be handled per connector anyway. Making the `Client`
`Sync` allows users to set up config for a `Client` once, such as using
a single connection `Pool`, and then make requests across multiple
threads.
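For example, one configured `Client` can now be shared across threads
(a sketch with the blocking client):

    extern crate hyper;

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Configure once, then share.
        let client = Arc::new(hyper::Client::new());

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let client = client.clone();
                thread::spawn(move || {
                    let _res = client.get("http://example.com").send();
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
    }
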
Closes #254
BREAKING CHANGE: Connectors and Protocols passed to the `Client` must
now also have a `Sync` bound, but this shouldn't break default usage.