This is a specialized thread pool for managing an acceptor across
multiple threads. A single handler is provided, and it is invoked on
each stream.
Unlike the old thread pool, this one returns a join guard that
blocks until the acceptor closes, enabling friendlier behavior
for the listening guard.
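A minimal sketch of the join-guard pattern, using plain std types rather than
hyper's own:

```rust
use std::thread::{self, JoinHandle};

// Dropping the guard joins every worker, so the caller blocks until the
// accept/handle loop in each thread has finished.
struct JoinGuard {
    workers: Vec<JoinHandle<()>>,
}

impl Drop for JoinGuard {
    fn drop(&mut self) {
        for worker in self.workers.drain(..) {
            let _ = worker.join();
        }
    }
}

fn spawn_workers(count: usize) -> JoinGuard {
    let workers = (0..count)
        .map(|_| thread::spawn(|| { /* run the accept/handle loop here */ }))
        .collect();
    JoinGuard { workers }
}
```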
The task pool itself is also faster, since it only pays for message passing
when sub-threads panic. In the optimistic case where panics are rare,
this avoids using channels for any other communication.
This improves performance by around 15%, from the usual ~90k req/sec
up to about 105k req/sec on my machine.
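Roughly, the panic-only messaging looks like this (an assumed sketch, not the
actual implementation):

```rust
use std::panic;
use std::sync::mpsc;
use std::thread;

// A worker never touches the channel on the happy path and only sends a
// message if its work panics.
fn spawn_worker(panics: mpsc::Sender<String>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        let result = panic::catch_unwind(|| {
            // handle accepted streams here
        });
        if result.is_err() {
            // the only message passing happens on this cold path
            let _ = panics.send("worker panicked".to_string());
        }
    })
}
```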
BREAKING_CHANGE: server::Listening::await is removed.
Currently headers are exported in many places. For example, you can access the
`Transfer-Encoding` header at `header`, `header::common`, and
`header::common::transfer_encoding`. Per discussion on IRC with
@seanmonstar and @reem, all contents of headers will be exposed at `header`
directly. Parsing utilities will be exposed at `header::parsing`. Header
macros can now be used from other crates.
This breaks much code that uses headers. Such code should take everything it
needs directly from `header::`; encodings are exposed at `header::Encoding::`,
and connection options at `header::ConnectionOption`.
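A rough sketch of usage after the move, assuming the typed header structs
(`TransferEncoding`, `Connection`, etc.) now live directly under `header`:

```rust
use hyper::header::{Connection, ConnectionOption, Encoding, Headers, TransferEncoding};

// Everything comes straight from `header`; no `header::common` paths.
fn set_defaults(headers: &mut Headers) {
    headers.set(TransferEncoding(vec![Encoding::Chunked]));
    headers.set(Connection(vec![ConnectionOption::KeepAlive]));
}
```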
Implements the missing enum cases in Http* and adds a new
method to the default Server implementation to take advantage
of the new TLS support.
Closes #1
- Some stray deriving -> derive changes
- use::{mod} -> use::{self}
- fmt.write -> fmt.write_str
This does not catch the last case needing fmt.write_str, in the
Show impl of a Header item. That will need to be changed
separately.
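For reference, the post-rename forms, sketched in the newer syntax with an
illustrative `Token` type:

```rust
use std::fmt::{self, Display}; // `{self}` where `{mod}` used to be

#[derive(Debug)] // `derive`, not `deriving`
struct Token(&'static str);

impl Display for Token {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(self.0) // `write_str`, not `write`
    }
}
```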
- Includes ergonomic traits like IntoUrl and IntoBody, allowing easy
usage.
- Client can have a RedirectPolicy.
- Client can have a SslVerifier.
Updated benchmarks for client. (Disabled rust-http client bench since it
hangs.)
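A rough usage sketch; the exact method names and the `RedirectPolicy` variant
shown here are assumptions:

```rust
use hyper::Client;
use hyper::client::RedirectPolicy;

fn fetch() {
    let mut client = Client::new();
    client.set_redirect_policy(RedirectPolicy::FollowNone);
    // IntoUrl lets a plain &str stand in for a parsed Url.
    let res = client.get("http://example.com/").send().unwrap();
    println!("status: {}", res.status);
}
```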
Internals have been shuffled around so that Request and Response are
now given only a mutable reference to the stream, instead of being
allowed to consume it. This allows the server to re-use streams when
keep-alive is true.
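The idea, sketched with plain std types instead of hyper's (shapes here are
illustrative only):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

// The handler only borrows the stream, so the accept loop can keep running
// it on the same connection while keep-alive holds.
fn handle_one(stream: &mut TcpStream) -> std::io::Result<bool> {
    let mut buf = [0u8; 4096];
    let n = stream.read(&mut buf)?; // read one (simplified) request
    stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")?;
    Ok(n > 0) // `true` means the connection stays open for another request
}

fn serve(listener: TcpListener) -> std::io::Result<()> {
    for stream in listener.incoming() {
        let mut stream = stream?;
        while handle_one(&mut stream)? {}
    }
    Ok(())
}
```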
A task pool is used, and the number of threads can currently be
adjusted with the `listen_threads()` method on Server.
[breaking-change]
Intertwining was a nice feature, but it slows down hyper significantly,
so it is being removed.
There is some fallout from this, mainly that Incoming has had its type
parameter changed to `<A = HttpAcceptor>` and that Handler now receives
one bounded by `A: NetworkAcceptor`.
[breaking-change]
Fixes #112
A connection is returned from Incoming.next(), and can be passed to a
separate thread before any parsing happens. Call conn.open() to get a
Result<(Request, Response)>.
BREAKING CHANGE
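A sketch of the intended flow; only `Incoming.next()` and `conn.open()` come
from the description above, and everything else (`incoming`, `handle`, the
thread-per-connection strategy) is a placeholder:

```rust
// `incoming` and `handle` are placeholders; only `next()` and `open()` are
// part of the described API.
while let Some(conn) = incoming.next() {
    thread::spawn(move || {
        // parsing only happens here, on the worker thread
        match conn.open() {
            Ok((req, res)) => handle(req, res),
            Err(e) => println!("failed to open connection: {:?}", e),
        }
    });
}
```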
Trying to default the type parameters leads to an ICE and strange type errors.
I think this is just due to the experimental state of default type params, and
this change can be rolled back once they are fixed.
Expose Server::many for creating a Server that will listen on many (ip, port)
pairs.
Handler still receives a simple Iterator of (Request, Response) pairs.
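A hypothetical call shape; only the name `Server::many` and the fact that it
takes several (ip, port) pairs come from this change, while the argument type,
`handler`, and the `listen` call are assumptions:

```rust
// Hypothetical usage; the argument shape and `listen` are assumptions.
let server = Server::many(vec![
    (Ipv4Addr::new(127, 0, 0, 1), 8080),
    (Ipv4Addr::new(127, 0, 0, 1), 8081),
]);
server.listen(handler);
```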
This is a breaking change since it changes the representation of Listener,
but Handler and Server::http are unchanged in their API.
Fixes #7
The client benchmarks did not have to be changed at all for this whole
refactor, and the server benchmark only had to specify a single type parameter,
and that only because it writes out the type of Listener, which is not normal
usage.