All instances of `old_io` and `old_path` were switched to the shiny new
`std::io`, `std::net`, and `std::path` modules. This means that
`Request` and `Response` now implement `Read` and `Write`.
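Since both types now speak the std traits, handler code can be written
against `Read`/`Write` alone. A minimal sketch of the idea, generic over
any reader/writer rather than hyper's actual handler signature:

    use std::io::{self, Read, Write};

    // Illustrative only: an echo routine that needs nothing beyond the std
    // traits, which is what `Request: Read` and `Response: Write` enable.
    fn echo<R: Read, W: Write>(req: &mut R, res: &mut W) -> io::Result<u64> {
        io::copy(req, res) // stream the request body straight into the response
    }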
Because of the changes to `TcpListener`, this also takes the opportunity
to fix how `Server` is used. As in other languages/frameworks, the
server is first created with a handler, and then a host/port is passed
to a `listen` method. This reverses the order `Server` used to require.
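A rough sketch of the new calling order, using a stand-in `Server` type
since the exact hyper signatures may differ:

    use std::io;
    use std::net::{TcpListener, TcpStream, ToSocketAddrs};

    // Stand-in type illustrating handler-first construction.
    struct Server<H> {
        handler: H,
    }

    impl<H: Fn(&mut TcpStream)> Server<H> {
        // The handler comes first...
        fn new(handler: H) -> Server<H> {
            Server { handler }
        }

        // ...and the host/port goes to `listen`, not the constructor.
        fn listen<A: ToSocketAddrs>(self, addr: A) -> io::Result<()> {
            let listener = TcpListener::bind(addr)?;
            for stream in listener.incoming() {
                (self.handler)(&mut stream?);
            }
            Ok(())
        }
    }

Usage then reads `Server::new(handler).listen("127.0.0.1:1337")`, with
the address here chosen purely for illustration.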
Closes #347
BREAKING CHANGE: Check the docs. Everything was touched.
This is a modified and specialized thread pool meant for
managing an acceptor in a multi-threaded way. A single handler
is provided which will be invoked on each stream.
Unlike the old thread pool, this returns a join guard which
will block until the acceptor closes, enabling friendly behavior
for the listening guard.
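The guard behavior can be sketched with a stand-in type whose `Drop`
joins the acceptor thread; this illustrates the technique, not hyper's
actual guard:

    use std::thread::JoinHandle;

    // Dropping the guard blocks until the acceptor thread has exited,
    // which is the friendly blocking behavior described above.
    struct ListeningGuard {
        acceptor: Option<JoinHandle<()>>,
    }

    impl Drop for ListeningGuard {
        fn drop(&mut self) {
            if let Some(handle) = self.acceptor.take() {
                let _ = handle.join(); // wait for the accept loop to close
            }
        }
    }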
The task pool itself is also faster as it only pays for message passing
if sub-threads panic. In the optimistic case where there are few panics,
this saves using channels for any other communication.
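The pay-only-on-panic idea can be sketched with a sentinel whose `Drop`
touches the channel solely while unwinding. Names here are illustrative,
not the pool's actual internals:

    use std::sync::mpsc::Sender;
    use std::thread;

    struct Sentinel {
        notify: Sender<()>,
    }

    impl Drop for Sentinel {
        fn drop(&mut self) {
            // The happy path never sends; only a panicking worker reports
            // back so the pool knows to spawn a replacement.
            if thread::panicking() {
                let _ = self.notify.send(());
            }
        }
    }

    fn spawn_worker<F>(notify: Sender<()>, work: F) -> thread::JoinHandle<()>
    where
        F: FnOnce() + Send + 'static,
    {
        thread::spawn(move || {
            let _sentinel = Sentinel { notify };
            work(); // a panic here fires the sentinel's Drop during unwinding
        })
    }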
This improves performance by around 15%, all the way to 105k req/sec
on my machine, which usually gets about 90k.
BREAKING CHANGE: `server::Listening::await` is removed.
Currently, headers are exported in many places. For example, the
`Transfer-Encoding` header can be accessed at `header`, `header::common`,
and `header::common::transfer_encoding`. Per discussion on IRC with
@seanmonstar and @reem, all header contents will be exposed at `header`
directly. Parsing utilities will be exposed at `header::parsing`. Header
macros can now be used from other crates.
This breaks most code that uses headers. Such code should import
everything it needs directly from `header`: encodings are exposed at
`header::Encoding`, and connection options at `header::ConnectionOption`.
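A rough before/after of the import paths, using the item names mentioned
above; the exact set of re-exports may vary by version:

    // Before: several overlapping paths to the same header.
    // use hyper::header::common::transfer_encoding::TransferEncoding;

    // After: everything comes straight from `header`, with parsing helpers
    // under `header::parsing`.
    use hyper::header::{Headers, TransferEncoding, Encoding, ConnectionOption};
    use hyper::header::parsing;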
- Includes ergonomic traits like IntoUrl and IntoBody, allowing easy
usage (see the sketch after this list).
- Client can have a RedirectPolicy.
- Client can have a SslVerifier.
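The conversion-trait idea can be sketched with a stand-in trait; this is
not hyper's `IntoUrl` definition, only the shape of the ergonomics:

    // Stand-in trait: accept anything URL-like so callers can pass a &str
    // instead of parsing a Url themselves.
    trait IntoUrl {
        fn into_url(self) -> Result<String, String>;
    }

    impl<'a> IntoUrl for &'a str {
        fn into_url(self) -> Result<String, String> {
            if self.starts_with("http://") || self.starts_with("https://") {
                Ok(self.to_owned())
            } else {
                Err(format!("not an absolute URL: {}", self))
            }
        }
    }

    // A client method generic over the trait lets calls like
    // get("http://example.com") work without an explicit parse step.
    fn get<U: IntoUrl>(url: U) -> Result<String, String> {
        url.into_url()
    }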
Updated benchmarks for client. (Disabled rust-http client bench since it
hangs.)
Internals have been shuffled around such that Request and Response are
now given only a mutable reference to the stream, instead of being
allowed to consume it. This allows the server to re-use the streams if
keep-alive is true.
A task pool is used, and the number of threads can currently be
adjusted with the `listen_threads()` method on Server.
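The thread count maps onto a fixed pool of workers pulling accepted
connections from a shared queue. A self-contained sketch of that shape,
not hyper's internals:

    use std::io;
    use std::net::{TcpListener, TcpStream};
    use std::sync::{mpsc, Arc, Mutex};
    use std::thread;

    fn listen_with_threads(addr: &str, threads: usize) -> io::Result<()> {
        let listener = TcpListener::bind(addr)?;
        let (tx, rx) = mpsc::channel::<TcpStream>();
        let rx = Arc::new(Mutex::new(rx));

        for _ in 0..threads {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Hold the lock only long enough to pull one connection.
                let conn = rx.lock().unwrap().recv();
                match conn {
                    Ok(stream) => handle(stream),
                    Err(_) => break, // acceptor hung up
                }
            });
        }

        for stream in listener.incoming() {
            let _ = tx.send(stream?);
        }
        Ok(())
    }

    fn handle(_stream: TcpStream) {
        // parse the request, write a response, honor keep-alive, etc.
    }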
[breaking-change]
A connection is returned from Incoming.next(), and can be passed to a
separate thread before any parsing happens. Call conn.open() to get a
Result<(Request, Response)>.
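A stand-in sketch of that flow, deferring all parsing until open() runs
on the worker thread; the types and error handling are illustrative:

    use std::io::{self, Read};
    use std::net::TcpStream;
    use std::thread;

    // Stand-in for the connection yielded by Incoming.next(): it owns the
    // raw stream and does no parsing until open() is called.
    struct Connection {
        stream: TcpStream,
    }

    impl Connection {
        // Parsing would begin here; the returned stream stands in for the
        // real (Request, Response) pair.
        fn open(mut self) -> io::Result<TcpStream> {
            let mut head = [0u8; 1024];
            let _ = self.stream.read(&mut head)?;
            Ok(self.stream)
        }
    }

    fn dispatch(conn: Connection) {
        // The unparsed connection crosses the thread boundary first.
        thread::spawn(move || match conn.open() {
            Ok(_stream) => { /* handle the request, write the response */ }
            Err(_e) => { /* log and drop the connection */ }
        });
    }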
BREAKING CHANGE
The client Request now uses the same system as a server Response to track
the write status of headers, and the API has been updated accordingly.
Additionally, the Request constructors have been moved onto the Request
type instead of being top-level hyper functions, as this better
namespaces the Client and Server.
Trying to default the type parameters leads to an ICE (internal
compiler error) and strange type errors.
I think this is just due to the experimental state of default type params and
this change can be rolled back when they are fixed.
Server and client benchmarks show that this makes very little difference
in performance, and using dynamic dispatch here is significantly more
ergonomic.
This also bounds NetworkStream with Send to prevent incorrect implementations.
Allows the implementation of mock streams for testing and flexibility.
Fixes #5
This introduces a new Trait, NetworkStream, which abstracts over
the functionality provided by TcpStream so that it can be easily
mocked and extended in testing and hyper can be used for
other connection sources.
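A rough sketch of the trait and a mock, written against today's std::io
traits; the actual items on hyper's NetworkStream may differ:

    use std::io::{self, Read, Write};
    use std::net::TcpStream;

    // Stand-in trait covering what the server needs from a connection.
    trait NetworkStream: Read + Write + Send {}

    impl NetworkStream for TcpStream {}

    // In-memory mock: reads from a prepared buffer, records every write.
    struct MockStream {
        incoming: io::Cursor<Vec<u8>>,
        outgoing: Vec<u8>,
    }

    impl Read for MockStream {
        fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
            self.incoming.read(buf)
        }
    }

    impl Write for MockStream {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            self.outgoing.write(buf)
        }
        fn flush(&mut self) -> io::Result<()> {
            Ok(())
        }
    }

    impl NetworkStream for MockStream {}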
Introduces two Phantom Types, Fresh and Streaming, which indicate the status
of a Response.
Response::start translates a Response<Fresh> into a
Response<Streaming> by writing the StatusCode and Headers.
Response<Fresh> allows modification of Headers and StatusCode, but does
not allow writing to the body. Response<Streaming> has the opposite privileges.
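A stand-in sketch of the typestate idea, with a toy Response over any
writer rather than hyper's actual definition:

    use std::io::{self, Write};
    use std::marker::PhantomData;

    struct Fresh;
    struct Streaming;

    struct Response<W: Write, State> {
        stream: W,
        status: u16,
        _state: PhantomData<State>,
    }

    impl<W: Write> Response<W, Fresh> {
        fn new(stream: W) -> Response<W, Fresh> {
            Response { stream, status: 200, _state: PhantomData }
        }

        // Only a Fresh response may still change the status (and headers).
        fn set_status(&mut self, status: u16) {
            self.status = status;
        }

        // Writing the head consumes Fresh and yields Streaming.
        fn start(mut self) -> io::Result<Response<W, Streaming>> {
            write!(self.stream, "HTTP/1.1 {}\r\n\r\n", self.status)?;
            Ok(Response {
                stream: self.stream,
                status: self.status,
                _state: PhantomData,
            })
        }
    }

    impl<W: Write> Response<W, Streaming> {
        // Only a Streaming response may write body bytes.
        fn write_body(&mut self, bytes: &[u8]) -> io::Result<()> {
            self.stream.write_all(bytes)
        }
    }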
This introduces a bit of complexity to the example,
mainly the use of the try_continue! macro for dealing
with errors, but I think this is an appropriate trade-off
as users of this library are likely to be framework authors
instead of end users.
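A guess at the shape of such a macro, for orientation only; the macro in
the example may differ. It behaves like try!, but on error it logs and
skips to the next iteration:

    macro_rules! try_continue {
        ($e:expr) => {
            match $e {
                Ok(v) => v,
                Err(err) => {
                    println!("error: {}", err);
                    continue;
                }
            }
        };
    }

    fn main() {
        // Toy usage: errors are reported and the loop simply moves on.
        let results: Vec<Result<u32, &str>> = vec![Ok(1), Err("boom"), Ok(3)];
        for r in results {
            let n = try_continue!(r);
            println!("got {}", n);
        }
    }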