* added weighted round robin algorithm to load balancer
* added an adapt integration test for wrr and fixed a typo
* changed args format to Caddyfile args convention
* added provisioner and validator for wrr
* simplified the code and improved the docs
When only a single host has the least number of requests, there's no need to compute a random number, because anything modulo 1 is always 0 anyway.
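A minimal sketch of that fast path, assuming the policy gathers the hosts tied for fewest requests and then picks among them at random (names are illustrative, not Caddy's exact code):
```go
package main

import (
	"fmt"
	"math/rand"
)

type host struct{ numRequests int }

// leastRequests gathers the hosts tied for fewest requests; with a
// single candidate the random pick is skipped, since n % 1 == 0 anyway.
func leastRequests(hosts []*host) *host {
	var best []*host
	for _, h := range hosts {
		switch {
		case len(best) == 0 || h.numRequests < best[0].numRequests:
			best = []*host{h}
		case h.numRequests == best[0].numRequests:
			best = append(best, h)
		}
	}
	if len(best) == 0 {
		return nil
	}
	if len(best) == 1 {
		return best[0] // fast path: no random number needed
	}
	return best[rand.Intn(len(best))]
}

func main() {
	pool := []*host{{numRequests: 3}, {numRequests: 1}, {numRequests: 2}}
	fmt.Println(leastRequests(pool).numRequests) // 1
}
```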
* log: make `sink` encodable
* deduplicate logger fields
* extract common fields into `BaseLog` and embed it into `SinkLog` (see the sketch after this list)
* amend godoc on `BaseLog` and `SinkLog`
* minor style change
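A minimal sketch of the `BaseLog`/`SinkLog` deduplication, with illustrative field names rather than Caddy's exact definitions:
```go
package logging

import "encoding/json"

// BaseLog holds the fields common to every log config.
type BaseLog struct {
	WriterRaw  json.RawMessage `json:"writer,omitempty"`
	EncoderRaw json.RawMessage `json:"encoder,omitempty"`
	Level      string          `json:"level,omitempty"`
}

// SinkLog configures the "sink" logger; embedding BaseLog makes it
// encodable with the same common fields instead of redeclaring them.
type SinkLog struct {
	BaseLog
}
```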
---------
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* cmd: Expand cobra support
* Convert commands to cobra and add short flags (see the sketch after this list)
* Fix version command typo
Co-authored-by: Emily Lange <git@indeednotjames.com>
* Apply suggestions from code review
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
---------
Co-authored-by: Emily Lange <git@indeednotjames.com>
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
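A minimal sketch of the cobra conversion mentioned above; the commands and flags shown are illustrative, not an exact copy of Caddy's wiring:
```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	rootCmd := &cobra.Command{Use: "caddy"}

	versionCmd := &cobra.Command{
		Use:   "version",
		Short: "Prints the version",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("v2.x.x")
			return nil
		},
	}
	rootCmd.AddCommand(versionCmd)

	runCmd := &cobra.Command{Use: "run", Short: "Starts the Caddy process"}
	// StringP registers both --config and its short form -c.
	runCmd.Flags().StringP("config", "c", "", "Configuration file")
	rootCmd.AddCommand(runCmd)

	cobra.CheckErr(rootCmd.Execute())
}
```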
* added some tests for parseUpstreamDialAddress
Test 4 fails because it produces "[[::1]]:80" instead of "[::1]:80"
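The double-bracketing is reproducible with the standard library alone, since `net.JoinHostPort` brackets any host that contains a colon, even one that is already bracketed:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// An already-bracketed IPv6 literal gets bracketed twice.
	fmt.Println(net.JoinHostPort("[::1]", "80")) // [[::1]]:80 (the bug)
	fmt.Println(net.JoinHostPort("::1", "80"))   // [::1]:80 (expected)
}
```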
* support absolute Windows paths in unix reverse proxy addresses
* make IsUnixNetwork public, support +h2c, and reuse it (see the sketch after this list)
* add new tests
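A minimal sketch of the now-public helper, assuming it simply checks the network-type prefix (covering "unix", "unixgram", and "unixpacket"):
```go
package caddy

import "strings"

// IsUnixNetwork returns true if netw is a unix network
// such as "unix", "unixgram", or "unixpacket".
func IsUnixNetwork(netw string) bool {
	return strings.HasPrefix(netw, "unix")
}
```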
* If the upstreams all use the same host but different ports,
i.e.:
foobar:4001
foobar:4002
foobar:4003
...
then, because FNV-1a does not have a good enough avalanche effect,
the hostByHashing result is not well balanced over all upstreams.
Since the last bytes of the FNV input tend to affect only a few bits,
the idea is to change the concatenation order between the key and the
upstream strings, so that the upstream's last byte has more impact on
hash diffusion (see the sketch below).
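A minimal illustration of the reordering with 32-bit FNV-1a, using a hypothetical key:
```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hash feeds the two strings to FNV-1a in the given order; with the
// upstream first, its final byte (the only byte differing between
// foobar:4001 and foobar:4002) is mixed through every key byte.
func hash(first, second string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(first))
	h.Write([]byte(second))
	return h.Sum32()
}

func main() {
	key := "203.0.113.7" // e.g. a client IP used as the hash key
	fmt.Println(hash(key, "foobar:4001"), hash(key, "foobar:4002")) // old order: poorly diffused
	fmt.Println(hash("foobar:4001", key), hash("foobar:4002", key)) // new order: better spread
}
```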
* reverseproxy: Mask the WS close message when we're the client
* weakrand
* Bump golangci-lint version so path ignores work on Windows
* gofmt
* ugh, gofmt everything, I guess
e338648fed introduced multiple upstream
addresses. A comment notes that mixing schemes isn't supported and
therefore the first valid scheme is supposed to be used.
Fixes setting the first scheme.
fixes #5087
Ideally I'd just remove the parameter to caddy.Context.Logger(), but
this would break most Caddy plugins.
Instead, I'm making it variadic and marking it as partially deprecated.
In the future, I might completely remove the parameter once most
plugins have updated.
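A minimal sketch of that variadic-parameter deprecation pattern, with illustrative names rather than Caddy's actual signature:
```go
package main

import "fmt"

// logger mimics making a formerly required parameter variadic so that
// old call sites keep compiling. Names are illustrative, not Caddy's.
func logger(name ...string) string {
	if len(name) == 0 {
		return "derived-from-caller" // new behavior: infer the module
	}
	return name[0] // deprecated path: explicit module still honored
}

func main() {
	fmt.Println(logger())       // new call sites
	fmt.Println(logger("http")) // old plugins keep working
}
```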
* feat: Multiple 'to' upstreams in reverse-proxy cmd
* Repeat --to for multiple upstreams, rather than comma-separating in a single flag
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
* reverseproxy: Close hijacked conns on reload/quit
We also send a Close control message to both ends of
WebSocket connections. I have tested this many times in
my dev environment with consistent success, although
the variety of scenarios was limited.
* Oops... actually call Close() this time
* CloseMessage --> closeMessage
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* Use httpguts, duh
* Use map instead of sync.Map
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* break up the code, use lazy reading, and pool `bufio.Writer`s (see the sketch after this list)
* close the underlying connection when an operation fails
* allocate bufWriter and streamWriter only once
* refactor record writing
* rebase from master
* handle err
* Fix type assertion
Also reduce some duplication
* Refactor client and clientCloser for logging
Should reduce allocations
* Minor cosmetic adjustments; apply Apache license
* Appease the linter
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
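A minimal sketch of the `bufio.Writer` pooling mentioned in the list above, assuming one pooled writer is pointed at each connection in turn; this shows the pattern, not the transport's exact code:
```go
package main

import (
	"bufio"
	"bytes"
	"io"
	"sync"
)

var bufWriterPool = sync.Pool{
	New: func() any { return bufio.NewWriterSize(io.Discard, 4096) },
}

// writeRecord borrows a pooled writer, points it at the connection,
// and returns it without retaining a reference to the connection.
func writeRecord(conn io.Writer, payload []byte) error {
	bw := bufWriterPool.Get().(*bufio.Writer)
	bw.Reset(conn)
	defer func() {
		bw.Reset(io.Discard) // drop the connection reference
		bufWriterPool.Put(bw)
	}()
	if _, err := bw.Write(payload); err != nil {
		return err
	}
	return bw.Flush()
}

func main() {
	var conn bytes.Buffer
	_ = writeRecord(&conn, []byte("FCGI record bytes"))
}
```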
This allows users to, for example, get upstreams from multiple SRV
endpoints in order (such as primary and secondary clusters).
Also, gofmt went to town on the comments, sigh
* reverseproxy: Implement retry count, alternative to try_duration
* Add Caddyfile support for `retry_match` (see the sketch after this list)
* Refactor to deduplicate matcher parsing logic
* Fix lint
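A minimal sketch of how a retry count could complement `try_duration`, assuming the proxy loops over attempts and stops on whichever budget is configured; this is illustrative, not the handler's exact logic:
```go
package main

import (
	"fmt"
	"time"
)

// keepRetrying stops on whichever budget is configured: a retry count
// if set, else the time-based try_duration window.
func keepRetrying(start time.Time, attempt, retries int, tryDuration time.Duration) bool {
	if retries > 0 {
		return attempt < retries
	}
	return time.Since(start) < tryDuration
}

func main() {
	start := time.Now()
	for attempt := 0; keepRetrying(start, attempt, 3, 0); attempt++ {
		fmt.Println("attempt", attempt) // would dial the next upstream here
	}
}
```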
See https://caddy.community/t/using-forward-auth-and-writing-my-own-authenticator-in-php/16410; apparently it didn't work when `copy_headers` wasn't used. This is because we were skipping adding a handler to the routes in the "good response handler", but that causes the logic in `reverseproxy.go` to ignore the response handler since it's empty. Instead, we can always put in the `header` handler: even with an empty `Set` operation it's just a no-op, and it fixes that condition in the proxy code.
* Make reverse proxy TLS server name replaceable for SNI upstreams.
* Reverted previous TLS server name replacement, and implemented thread safe version.
* Move TLS servername replacement into its own function (see the sketch after this list)
* Moved SNI servername replacement into httptransport.
* Solve issue when dynamic upstreams use wrong protocol upstream.
* Revert previous commit.
Old commit was: Solve issue when dynamic upstreams use wrong protocol upstream.
Id: 3c9806ccb6
* Added SkipTLSPorts option to http transport.
* Fix typo in test config file.
* Rename config option as suggested by Matt
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
* Update code to match renamed config option.
* Fix typo in config option name.
* Fix another typo that I missed.
* Fix tests not completing because of an apparent wrong ordering of options.
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
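A minimal sketch of the thread-safe server-name replacement described in the list above: clone the shared `tls.Config` per connection instead of mutating it (the placeholder handling stands in for Caddy's replacer):
```go
package main

import (
	"crypto/tls"
	"fmt"
	"strings"
)

// replaceServerName returns a per-connection clone of the shared TLS
// config with placeholders in ServerName resolved; mutating the shared
// config instead would race across concurrent requests.
func replaceServerName(shared *tls.Config, host string) *tls.Config {
	cfg := shared.Clone()
	cfg.ServerName = strings.ReplaceAll(cfg.ServerName, "{http.request.host}", host)
	return cfg
}

func main() {
	shared := &tls.Config{ServerName: "{http.request.host}"}
	fmt.Println(replaceServerName(shared, "example.com").ServerName) // example.com
	fmt.Println(shared.ServerName)                                   // unchanged
}
```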
In v2.5.0, upstream health was fixed such that whether an upstream is
considered healthy or not is mostly up to each individual handler's
config. Since "healthy" is an opinion, it is not a global value.
I unintentionally left in the "healthy" field in the API endpoint for
checking upstreams, and it is now misleading (see #4792).
However, num_requests and fails remain, so health can be determined by
the API client rather than being opaquely (and unhelpfully) determined
for the client.
If we do restore this value later on, it'd need to be replicated once
per reverse_proxy handler according to their individual configs.
* reverseproxy: Improve hashing LB policies with HRW
Previously, if a list of upstreams changed, hash-based LB policies
would be greatly affected because the hash relied on the position of
upstreams in the pool. Highest Random Weight or "rendezvous" hashing
is apparently robust to pool changes. It runs in O(n) instead of
O(log n), but n is usually very small (see the sketch below).
* Fix bug and update tests
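A minimal sketch of rendezvous (Highest Random Weight) selection: each upstream's score depends only on itself and the key, so adding or removing an upstream only remaps the keys that scored highest on it. `hostByHashing` here is illustrative, not the handler's exact code:
```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hostByHashing scores every upstream against the key with FNV-1a and
// returns the highest scorer, independent of position in the pool.
func hostByHashing(upstreams []string, key string) string {
	var best string
	var bestScore uint32
	for _, up := range upstreams {
		h := fnv.New32a()
		h.Write([]byte(up)) // upstream first; see the diffusion note earlier
		h.Write([]byte(key))
		if score := h.Sum32(); best == "" || score > bestScore {
			best, bestScore = up, score
		}
	}
	return best
}

func main() {
	ups := []string{"foobar:4001", "foobar:4002", "foobar:4003"}
	fmt.Println(hostByHashing(ups, "client-ip-or-uri"))
}
```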
* reverseproxy: Sync up `handleUpgradeResponse` with stdlib
I had left this as a TODO for when we bump to a minimum of Go 1.17, but I should've realized it was under `internal`, so it couldn't be used directly.
Copied the functions we needed for parity. Hopefully this is ok!
* Add tests and fix godoc comments
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
* reverseproxy: New `copy_response` handler for `handle_response` routes
Followup to #4298 and #4388.
This adds a new `copy_response` handler which may only be used in `reverse_proxy`'s `handle_response` routes, which can be used to actually copy the proxy response downstream.
Previously, if `handle_response` was used (with routes, not the status code mode), it was impossible to use the upstream's response body at all, because we would always close the body, expecting the routes to write a new body from scratch.
To implement this, I had to refactor `h.reverseProxy()` to move all the code that came after the `HandleResponse` loop into a new function. This new function `h.finalizeResponse()` takes care of preparing the response by removing extra headers, dealing with trailers, then copying the headers and body downstream.
Since basically what we want `copy_response` to do is invoke `h.finalizeResponse()` at a configurable point in time, we need to pass down the proxy handler, the response, and some other state via a new `req.WithContext(ctx)`. Wrapping a new context is pretty much the only way we have to jump a few layers in the HTTP middleware chain and let a handler pick up this information. Feels a bit dirty, but it works.
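A minimal sketch of that context plumbing, assuming an unexported key type; the names are illustrative, not what `reverse_proxy` actually uses:
```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

// ctxKey is unexported so values set here can't collide with other
// packages' context values.
type ctxKey struct{}

// withProxyState stashes handler state on the request so a handler
// deeper in the middleware chain (like copy_response) can pick it up.
func withProxyState(r *http.Request, state any) *http.Request {
	return r.WithContext(context.WithValue(r.Context(), ctxKey{}, state))
}

func proxyState(r *http.Request) (any, bool) {
	state := r.Context().Value(ctxKey{})
	return state, state != nil
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "http://localhost/", nil)
	req = withProxyState(req, "response-and-handler-state")
	if state, ok := proxyState(req); ok {
		fmt.Println(state)
	}
}
```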
Also fixed a bug with the `http.reverse_proxy.upstream.duration` placeholder, it always had the same duration as `http.reverse_proxy.upstream.latency`, but the former was meant to be the time taken for the roundtrip _plus_ copying/writing the response.
* Delete the "Content-Length" header if we aren't copying
Fixes a bug where the Content-Length would mismatch the actual bytes written if we skipped copying the response, so we'd get a message like this when using curl:
```
curl: (18) transfer closed with 18 bytes remaining to read
```
To replicate:
```
{
	admin off
	debug
}

:8881 {
	reverse_proxy 127.0.0.1:8882 {
		@200 status 200
		handle_response @200 {
			header Foo bar
		}
	}
}

:8882 {
	header Content-Type application/json
	respond `{"hello": "world"}` 200
}
```
* Implement `copy_response_headers`, with include/exclude list support
* Apply suggestions from code review
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>