mirror of https://github.com/caddyserver/caddy.git
synced 2024-12-26 21:53:48 +03:00

Merge branch 'caddyserver:master' into master

commit 45d5064f1c
490 changed files with 69695 additions and 11252 deletions
5 .editorconfig Normal file

@@ -0,0 +1,5 @@
+[*]
+end_of_line = lf
+
+[caddytest/integration/caddyfile_adapt/*.caddyfiletest]
+indent_style = tab
1 .gitattributes vendored Normal file

@@ -0,0 +1 @@
+*.go text eol=lf
53 .github/CONTRIBUTING.md vendored

@@ -1,7 +1,7 @@
Contributing to Caddy
=====================

-Welcome! Thank you for choosing to be a part of our community. Caddy wouldn't be great without your involvement!
+Welcome! Thank you for choosing to be a part of our community. Caddy wouldn't be nearly as excellent without your involvement!

For starters, we invite you to join [the Caddy forum](https://caddy.community) where you can hang out with other Caddy users and developers.

@@ -23,38 +23,50 @@ Other menu items:

### Contributing code

-You can have a huge impact on the project by helping with its code. To contribute code to Caddy, open a [pull request](https://github.com/caddyserver/caddy/pulls) (PR). If you're new to our community, that's okay: **we gladly welcome pull requests from anyone, regardless of your native language or coding experience.** You can get familiar with Caddy's code base by using [code search at Sourcegraph](https://sourcegraph.com/github.com/caddyserver/caddy/-/search).
+You can have a huge impact on the project by helping with its code. To contribute code to Caddy, first submit or comment in an issue to discuss your contribution, then open a [pull request](https://github.com/caddyserver/caddy/pulls) (PR). If you're new to our community, that's okay: **we gladly welcome pull requests from anyone, regardless of your native language or coding experience.** You can get familiar with Caddy's code base by using [code search at Sourcegraph](https://sourcegraph.com/github.com/caddyserver/caddy).

-We hold contributions to a high standard for quality :bowtie:, so don't be surprised if we ask for revisions—even if it seems small or insignificant. Please don't take it personally. :blue_heart: If your change is on the right track, we can guide you to make it mergable.
+We hold contributions to a high standard for quality :bowtie:, so don't be surprised if we ask for revisions—even if it seems small or insignificant. Please don't take it personally. :blue_heart: If your change is on the right track, we can guide you to make it mergeable.

Here are some of the expectations we have of contributors:

-- **Open an issue to propose your change first.** This way we can avoid confusion, coordinate what everyone is working on, and ensure that any changes are in-line with the project's goals and the best interests of its users. We can also discuss the best possible implementation. If there's already an issue about it, comment on the existing issue to claim it.
+- **Open an issue to propose your change first.** This way we can avoid confusion, coordinate what everyone is working on, and ensure that any changes are in-line with the project's goals and the best interests of its users. We can also discuss the best possible implementation. If there's already an issue about it, comment on the existing issue to claim it. A lot of valuable time can be saved by discussing a proposal first.

- **Keep pull requests small.** Smaller PRs are more likely to be merged because they are easier to review! We might ask you to break up large PRs into smaller ones. [An example of what we want to avoid.](https://twitter.com/iamdevloper/status/397664295875805184)

- **Keep related commits together in a PR.** We do want pull requests to be small, but you should also keep multiple related commits in the same PR if they rely on each other.

-- **Write tests.** Tests are essential! Written properly, they ensure your change works, and that other changes in the future won't break your change. CI checks should pass.
+- **Write tests.** Good, automated tests are very valuable! Written properly, they ensure your change works, and that other changes in the future won't break your change. CI checks should pass.

-- **Benchmarks should be included for optimizations.** Optimizations sometimes make code harder to read or have changes that are less than obvious. They should be proven with benchmarks or profiling.
+- **Benchmarks should be included for optimizations.** Optimizations sometimes make code harder to read or have changes that are less than obvious. They should be proven with benchmarks and profiling.

- **[Squash](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html) insignificant commits.** Every commit should be significant. Commits which merely rewrite a comment or fix a typo can be combined into another commit that has more substance. Interactive rebase can do this, or a simpler way is `git reset --soft <diverging-commit>` then `git commit -s`.

-- **Own your contributions.** Caddy is a growing project, and it's much better when individual contributors help maintain their change after it is merged.
+- **Be responsible for and maintain your contributions.** Caddy is a growing project, and it's much better when individual contributors help maintain their change after it is merged.

- **Use comments properly.** We expect good godoc comments for package-level functions, types, and values. Comments are also useful whenever the purpose for a line of code is not obvious.

-We often grant [collaborator status](#collaborator-instructions) to contributors who author one or more significant, high-quality PRs that are merged into the code base!
+- **Pull requests may still get closed.** The longer a PR stays open and idle, the more likely it is to be closed. If we haven't reviewed it in a while, it probably means the change is not a priority. Please don't take this personally, we're trying to balance a lot of tasks! If nobody else has commented or reacted to the PR, it likely means your change is useful only to you. The reality is this happens quite a lot. We don't tend to accept PRs that aren't generally helpful. For these reasons or others, the PR may get closed even after a review. We are not obligated to accept all proposed changes, even if the best justification we can give is something vague like, "It doesn't sit right." Sometimes PRs are just the wrong thing or the wrong time. Because it is open source, you can always build your own modified version of Caddy with a change you need, even if we reject it in the official repo. Plus, because Caddy is extensible, it's possible your feature could make a great plugin instead!
+
+- **You certify that you wrote and comprehend the code you submit.** The Caddy project welcomes original contributions that comply with [our CLA](https://cla-assistant.io/caddyserver/caddy), meaning that authors must be able to certify that they created or have rights to the code they are contributing. In addition, we require that code is not simply copy-pasted from Q/A sites or AI language models without full comprehension and rigorous testing. In other words: contributors are allowed to refer to communities for assistance and use AI tools such as language models for inspiration, but code which originates from or is assisted by these resources MUST be:
+
+	- Licensed for you to freely share
+	- Fully comprehended by you (be able to explain every line of code)
+	- Verified by automated tests when feasible, or thorough manual tests otherwise
+
+We have found that current language models (LLMs, like ChatGPT) may understand code syntax and even problem spaces to an extent, but often fail in subtle ways to convey true knowledge and produce correct algorithms. Integrated tools such as GitHub Copilot and Sourcegraph Cody may be used for inspiration, but code generated by these tools still needs to meet our criteria for licensing, human comprehension, and testing. These tools may be used to help write code comments and tests as long as you can certify they are accurate and correct. Note that it is often more trouble than it's worth to certify that Copilot (for example) is not giving you code that is possibly plagiarised, unlicensed, or licensed with incompatible terms -- as the Caddy project cannot accept such contributions. If that's too difficult for you (or impossible), then we recommend using these resources only for inspiration and write your own code. Ultimately, you (the contributor) are responsible for the code you're submitting.
+
+As a courtesy to reviewers, we kindly ask that you disclose when contributing code that was generated by an AI tool or copied from another website so we can be aware of what to look for in code review.
+
+We often grant [collaborator status](#collaborator-instructions) to contributors who author one or more significant, high-quality PRs that are merged into the code base.


#### HOW TO MAKE A PULL REQUEST TO CADDY

-Contributing to Go projects on GitHub is fun and easy. We recommend the following workflow:
+Contributing to Go projects on GitHub is fun and easy. After you have proposed your change in an issue, we recommend the following workflow:

1. [Fork this repo](https://github.com/caddyserver/caddy). This makes a copy of the code you can write to.

-2. If you don't already have this repo (caddyserver/caddy.git) repo on your computer, get it with `go get github.com/caddyserver/caddy/v2`.
+2. If you don't already have this repo (caddyserver/caddy.git) repo on your computer, clone it down: `git clone https://github.com/caddyserver/caddy.git`

3. Tell git that it can push the caddyserver/caddy.git repo to your fork by adding a remote: `git remote add myfork https://github.com/<your-username>/caddy.git`
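The fork/clone/remote steps above can be exercised end to end. A runnable sketch, using local bare repositories (`upstream.git`, `fork.git`) as hypothetical stand-ins for caddyserver/caddy.git and your fork, so no network or GitHub account is needed:

```shell
#!/bin/sh
set -e
# Local stand-ins: upstream.git plays caddyserver/caddy.git,
# fork.git plays https://github.com/<your-username>/caddy.git.
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare upstream.git
git init -q --bare fork.git

# Step 2: clone the main repo down.
git clone -q upstream.git caddy
cd caddy
git config user.email you@example.com
git config user.name "You"

# Step 3: add your fork as a remote named 'myfork'.
git remote add myfork ../fork.git

# Work on a feature branch, commit with -s (sign-off), and push the
# branch to your fork -- that branch is what you open the PR from.
git checkout -q -b my-change
echo "change" > change.txt
git add change.txt
git commit -q -s -m "example: demonstrate the PR workflow"
git push -q myfork my-change

# The fork now has the branch ready for a pull request:
git ls-remote --heads ../fork.git
```

With real repositories, the only differences are the URLs: clone from caddyserver/caddy and point `myfork` at your GitHub fork.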
@@ -85,9 +97,9 @@ Many people on the forums could benefit from your experience and expertise, too.

Like every software, Caddy has its flaws. If you find one, [search the issues](https://github.com/caddyserver/caddy/issues) to see if it has already been reported. If not, [open a new issue](https://github.com/caddyserver/caddy/issues/new) and describe the bug, and somebody will look into it! (This repository is only for Caddy and its standard modules.)

-**You can help stop bugs in their tracks!** Speed up the patch by identifying the bug in the code. This can sometimes be done by adding `fmt.Println()` statements (or similar) in relevant code paths to narrow down where the problem may be. It's a good way to [introduce yourself to the Go language](https://tour.golang.org), too.
+**You can help us fix bugs!** Speed up the patch by identifying the bug in the code. This can sometimes be done by adding `fmt.Println()` statements (or similar) in relevant code paths to narrow down where the problem may be. It's a good way to [introduce yourself to the Go language](https://tour.golang.org), too.

-Please follow the issue template so we have all the needed information. Unredacted—yes, actual values matter. We need to be able to repeat the bug using your instructions. Please simplify the issue as much as possible. The burden is on you to convince us that it is actually a bug in Caddy. This is easiest to do when you write clear, concise instructions so we can reproduce the behavior (even if it seems obvious). The more detailed and specific you are, the faster we will be able to help you!
+We may reply with an issue template. Please follow the template so we have all the needed information. Unredacted—yes, actual values matter. We need to be able to repeat the bug using your instructions. Please simplify the issue as much as possible. If you don't, we might close your report. The burden is on you to make it easily reproducible and to convince us that it is actually a bug in Caddy. This is easiest to do when you write clear, concise instructions so we can reproduce the behavior (even if it seems obvious). The more detailed and specific you are, the faster we will be able to help you!

We suggest reading [How to Report Bugs Effectively](http://www.chiark.greenend.org.uk/~sgtatham/bugs.html).

@@ -98,11 +110,12 @@ Please be kind. :smile: Remember that Caddy comes at no cost to you, and you're

Maintainers---or more generally, developers---need three things to act on bugs:

1. To agree or be convinced that it's a bug (reporter's responsibility).
-	- A bug is undesired or surprising behavior which violates documentation or the spec.
+	- A bug is unintentional, undesired, or surprising behavior which violates documentation or relevant spec. It might be either a mistake in the documentation or a bug in the code.
+	- This project usually does not work around bugs in other software, systems, and dependencies; instead, we recommend that those bugs are fixed at their source. This sometimes means we close issues or reject PRs that attempt to fix, workaround, or hide bugs in other projects.

2. To be able to understand what is happening (mostly reporter's responsibility).
-	- If the reporter can provide satisfactory instructions such that a developer can reproduce the bug, the developer will likely be able to understand the bug, write a test case, and implement a fix.
-	- Otherwise, the burden is on the reporter to test possible solutions. This is discouraged because it loosens the feedback loop, slows down debugging efforts, obscures the true nature of the problem from the developers, and is unlikely to result in new test cases.
+	- If the reporter can provide satisfactory instructions such that a developer can reproduce the bug, the developer will likely be able to understand the bug, write a test case, and implement a fix. This is the least amount of work for everyone and path to the fastest resolution.
+	- Otherwise, the burden is on the reporter to test possible solutions. This is less preferable because it loosens the feedback loop, slows down debugging efforts, obscures the true nature of the problem from the developers, and is unlikely to result in new test cases.

3. A solution, or ideas toward a solution (mostly maintainer's responsibility).
	- Sometimes the best solution is a documentation change.

@@ -112,7 +125,7 @@ Maintainers---or more generally, developers---need three things to act on bugs:

Thus, at the very least, the reporter is expected to:

-1. Convince the reader that it's a bug (if it's not obvious).
+1. Convince the reader that it's a bug in Caddy (if it's not obvious).
2. Reduce the problem down to the minimum specific steps required to reproduce it.

The maintainer is usually able to do the rest; but of course the reporter may invest additional effort to speed up the process.

@@ -123,7 +136,7 @@ The maintainer is usually able to do the rest; but of course the reporter may in

First, [search to see if your feature has already been requested](https://github.com/caddyserver/caddy/issues). If it has, you can add a :+1: reaction to vote for it. If your feature idea is new, open an issue to request the feature. Please describe your idea thoroughly so that we know how to implement it! Really vague requests may not be helpful or actionable and, without clarification, will have to be closed.

-While we really do value your requests and implement many of them, not all features are a good fit for Caddy. Most of those [make good modules](#writing-a-caddy-module), which can be made by anyone! But if a feature is not in the best interest of the Caddy project or its users in general, we may politely decline to implement it into Caddy core.
+While we really do value your requests and implement many of them, not all features are a good fit for Caddy. Most of those [make good modules](#writing-a-caddy-module), which can be made by anyone! But if a feature is not in the best interest of the Caddy project or its users in general, we may politely decline to implement it into Caddy core. Additionally, some features are bad ideas altogether (for either obvious or non-obvious reasons) which may be rejected. We'll try to explain why we reject a feature, but sometimes the best we can do is, "It's not a good fit for the project."

### Improving documentation

@@ -132,11 +145,11 @@ Caddy's documentation is available at [https://caddyserver.com/docs](https://cad

Note that third-party module documentation is not hosted by the Caddy website, other than basic usage examples. They are managed by the individual module authors, and you will have to contact them to change their documentation.

Our documentation is scoped to the Caddy project only: it is not for describing how other software or systems work, even if they relate to Caddy or web servers. That kind of content [can be found in our community wiki](https://caddy.community/c/wiki/13), however.

## Collaborator Instructions

-Collaborators have push rights to the repository. We grant this permission after one or more successful, high-quality PRs are merged! We thank them for their help.The expectations we have of collaborators are:
+Collaborators have push rights to the repository. We grant this permission after one or more successful, high-quality PRs are merged! We thank them for their help. The expectations we have of collaborators are:

- **Help review pull requests.** Be meticulous, but also kind. We love our contributors, but we critique the contribution to make it better. Multiple, thorough reviews make for the best contributions! Here are some questions to consider:
	- Can the change be made more elegant?

@@ -167,7 +180,7 @@ Collaborators have push rights to the repository. We grant this permission after

-## Values
+## Values (WIP)

- A person is always more important than code. People don't like being handled "efficiently". But we can still process issues and pull requests efficiently while being kind, patient, and considerate.
48 .github/SECURITY.md vendored

@@ -2,26 +2,58 @@

The Caddy project would like to make sure that it stays on top of all practically-exploitable vulnerabilities.

-Some security problems are more the result of interplay between different components of the Web, rather than a vulnerability in the web server itself. Please report only vulnerabilities in the web server itself, as we cannot coerce the rest of the Web to be fixed (for example, we do not consider IP spoofing or BGP hijacks a vulnerability in the Caddy web server).
-
-Please note that we consider publicly-registered domain names to be public information. This necessary in order to maintain the integrity of certificate transparency, public DNS, and other public trust systems.

## Supported Versions

| Version | Supported |
| ------- | ------------------ |
-| 2.x | :white_check_mark: |
-| 1.x | :white_check_mark: (deprecating soon) |
+| 2.x | ✔️ |
+| 1.x | :x: |
+| < 1.x | :x: |

+## Acceptable Scope
+
+A security report must demonstrate a security bug in the source code from this repository.
+
+Some security problems are the result of interplay between different components of the Web, rather than a vulnerability in the web server itself. Please only report vulnerabilities in the web server itself, as we cannot coerce the rest of the Web to be fixed (for example, we do not consider IP spoofing, BGP hijacks, or missing/misconfigured HTTP headers a vulnerability in the Caddy web server).
+
+Vulnerabilities caused by misconfigurations are out of scope. Yes, it is entirely possible to craft and use a configuration that is unsafe, just like with every other web server; we recommend against doing that.
+
+We do not accept reports if the steps imply or require a compromised system or third-party software, as we cannot control those. We expect that users secure their own systems and keep all their software patched. For example, if untrusted users are able to upload/write/host arbitrary files in the web root directory, it is NOT a security bug in Caddy if those files get served to clients; however, it _would_ be a valid report if a bug in Caddy's source code unintentionally gave unauthorized users the ability to upload unsafe files or delete files without relying on an unpatched system or piece of software.
+
+Client-side exploits are out of scope. In other words, it is not a bug in Caddy if the web browser does something unsafe, even if the downloaded content was served by Caddy. (Those kinds of exploits can generally be mitigated by proper configuration of HTTP headers.) As a general rule, the content served by Caddy is not considered in scope because content is configurable by the site owner or the associated web application.
+
+Security bugs in code dependencies (including Go's standard library) are out of scope. Instead, if a dependency has patched a relevant security bug, please feel free to open a public issue or pull request to update that dependency in our code.

## Reporting a Vulnerability

-Please email Matt Holt (the author) directly: Matthew dot Holt at Gmail.
+We get a lot of difficult reports that turn out to be invalid. Clear, obvious reports tend to be the most credible (but are also rare).

-We'll need enough information to verify the bug and make a patch. It will speed things up if you suggest a working patch, such as a code diff, and explain why and how it works. Reports that are not actionable, do not contain enough information, are too pushy/demanding, or are not able to convince us that it is a viable and practical attack on the web server itself may be deferred to a later time or possibly ignored, resources permitting. Priority will be given to credible, responsible reports that are constructive, specific, and actionable. Thank you for understanding.
+First please ensure your report falls within the accepted scope of security bugs (above).
+
+We'll need enough information to verify the bug and make a patch. To speed things up, please include:
+
+- Most minimal possible config (without redactions!)
+- Command(s)
+- Precise HTTP requests (`curl -v` and its output please)
+- Full log output (please enable debug mode)
+- Specific minimal steps to reproduce the issue from scratch
+- A working patch
+
+Please DO NOT use containers, VMs, cloud instances or services, or any other complex infrastructure in your steps. Always prefer `curl -v` instead of web browsers.
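To illustrate the checklist above, a hypothetical minimal report skeleton; the `:8080` site, the `respond "hello"` body, and the implied bug are invented for demonstration, and a real report would use your actual, unredacted config and requests:

```shell
# 1) Smallest possible config, unredacted, with debug logging enabled
#    via the global options block:
cat > Caddyfile <<'EOF'
{
	debug
}

:8080 {
	respond "hello"
}
EOF

# 2) Exact command(s) used (requires a caddy binary; shown as comments):
#      caddy run --config Caddyfile
# 3) Precise HTTP request, captured with curl -v, full output included:
#      curl -v http://localhost:8080/
# 4) Attach caddy's complete debug log output and these steps verbatim.
```

The point is that a reviewer can replay the report from scratch with nothing but this file and two commands.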
+We consider publicly-registered domain names to be public information. This is necessary in order to maintain the integrity of certificate transparency, public DNS, and other public trust systems. Do not redact domain names from your reports. The actual content of your domain name affects Caddy's behavior, so we need the exact domain name(s) to reproduce with, or your report will be ignored.
+
+It will speed things up if you suggest a working patch, such as a code diff, and explain why and how it works. Reports that are not actionable, do not contain enough information, are too pushy/demanding, or are not able to convince us that it is a viable and practical attack on the web server itself may be deferred to a later time or possibly ignored, depending on available resources. Priority will be given to credible, responsible reports that are constructive, specific, and actionable. (We get a lot of invalid reports.) Thank you for understanding.
+
+When you are ready, please email Matt Holt (the author) directly: matt at dyanim dot com.

Please don't encrypt the email body. It only makes the process more complicated.

Please also understand that due to our nature as an open source project, we do not have a budget to award security bounties. We can only thank you.

-If your report is valid and a patch is released, we will not reveal your identity by default. If you wish to be credited, please give us the name to use.
+If your report is valid and a patch is released, we will not reveal your identity by default. If you wish to be credited, please give us the name to use and/or your GitHub username. If you don't provide this we can't credit you.

Thanks for responsibly helping Caddy—and thousands of websites—be more secure!
7 .github/dependabot.yml vendored Normal file

@@ -0,0 +1,7 @@
+---
+version: 2
+updates:
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "monthly"
169 .github/workflows/ci.yml vendored

@@ -1,14 +1,16 @@
# Used as inspiration: https://github.com/mvdan/github-actions-golang

-name: Cross-Platform
+name: Tests

on:
  push:
-    branches:
+    branches:
      - master
      - 2.*
  pull_request:
-    branches:
+    branches:
      - master
      - 2.*

jobs:
  test:

@@ -16,35 +18,53 @@ jobs:
      # Default is true, cancels jobs for other platforms in the matrix if one fails
      fail-fast: false
      matrix:
-        os: [ ubuntu-latest, macos-latest, windows-latest ]
-        go-version: [ 1.14.x ]
+        os:
+          - linux
+          - mac
+          - windows
+        go:
+          - '1.22'
+          - '1.23'
+
+        include:
+          # Set the minimum Go patch version for the given Go minor
+          # Usable via ${{ matrix.GO_SEMVER }}
+          - go: '1.22'
+            GO_SEMVER: '~1.22.3'
+
+          - go: '1.23'
+            GO_SEMVER: '~1.23.0'

        # Set some variables per OS, usable via ${{ matrix.VAR }}
+        # OS_LABEL: the VM label from GitHub Actions (see https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories)
+        # CADDY_BIN_PATH: the path to the compiled Caddy binary, for artifact publishing
+        # SUCCESS: the typical value for $? per OS (Windows/pwsh returns 'True')
        include:
-          - os: ubuntu-latest
+          - os: linux
+            OS_LABEL: ubuntu-latest
            CADDY_BIN_PATH: ./cmd/caddy/caddy
            SUCCESS: 0

-          - os: macos-latest
+          - os: mac
+            OS_LABEL: macos-14
            CADDY_BIN_PATH: ./cmd/caddy/caddy
            SUCCESS: 0

-          - os: windows-latest
+          - os: windows
+            OS_LABEL: windows-latest
            CADDY_BIN_PATH: ./cmd/caddy/caddy.exe
            SUCCESS: 'True'

-    runs-on: ${{ matrix.os }}
+    runs-on: ${{ matrix.OS_LABEL }}

    steps:
-    - name: Install Go
-      uses: actions/setup-go@v1
-      with:
-        go-version: ${{ matrix.go-version }}
-
    - name: Checkout code
-      uses: actions/checkout@v2
+      uses: actions/checkout@v4
+
+    - name: Install Go
+      uses: actions/setup-go@v5
+      with:
+        go-version: ${{ matrix.GO_SEMVER }}
+        check-latest: true

    # These tools would be useful if we later decide to reinvestigate
    # publishing test/coverage reports to some tool for easier consumption

@@ -53,10 +73,11 @@
      # go get github.com/axw/gocov/gocov
      # go get github.com/AlekSi/gocov-xml
      # go get -u github.com/jstemmer/go-junit-report
-      # echo "::add-path::$(go env GOPATH)/bin"
+      # echo "$(go env GOPATH)/bin" >> $GITHUB_PATH

    - name: Print Go version and environment
      id: vars
+      shell: bash
      run: |
        printf "Using go at: $(which go)\n"
        printf "Go version: $(go version)\n"

@@ -64,17 +85,9 @@
        go env
        printf "\n\nSystem environment:\n\n"
        env
+        printf "Git version: $(git version)\n\n"
        # Calculate the short SHA1 hash of the git commit
-        echo "::set-output name=short_sha::$(git rev-parse --short HEAD)"
-        echo "::set-output name=go_cache::$(go env GOCACHE)"
-
-    - name: Cache the build cache
-      uses: actions/cache@v1
-      with:
-        path: ${{ steps.vars.outputs.go_cache }}
-        key: ${{ runner.os }}-go-ci-${{ hashFiles('**/go.sum') }}
-        restore-keys: |
-          ${{ runner.os }}-go-ci
+        echo "short_sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
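The hunk above migrates from the deprecated `::set-output` workflow command to appending to the `$GITHUB_OUTPUT` file. A minimal sketch of the new mechanism; outside of GitHub Actions the runner does not set `GITHUB_OUTPUT`, so this points it at a temporary file to show what the runner would parse:

```shell
#!/bin/sh
# The Actions runner normally provides GITHUB_OUTPUT; simulate it locally.
GITHUB_OUTPUT=$(mktemp)
export GITHUB_OUTPUT

# Deprecated form (no longer honored by GitHub Actions):
#   echo "::set-output name=short_sha::abc1234"
# Current form: append key=value lines to the file named by $GITHUB_OUTPUT.
echo "short_sha=abc1234" >> "$GITHUB_OUTPUT"

# A later step would read this as ${{ steps.vars.outputs.short_sha }}.
cat "$GITHUB_OUTPUT"
```

The same file-based pattern applies to `::add-path` (now `$GITHUB_PATH`), which the earlier hunk updates in a comment.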
- name: Get dependencies
|
||||
run: |
|
||||
|
@ -86,13 +99,20 @@ jobs:
|
|||
env:
|
||||
CGO_ENABLED: 0
|
||||
run: |
|
||||
go build -trimpath -ldflags="-w -s" -v
|
||||
go build -tags nobadger -trimpath -ldflags="-w -s" -v
|
||||
|
||||
- name: Smoke test Caddy
|
||||
working-directory: ./cmd/caddy
|
||||
run: |
|
||||
./caddy start
|
||||
./caddy stop
|
||||
|
||||
- name: Publish Build Artifact
|
||||
uses: actions/upload-artifact@v1
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: caddy_v2_${{ runner.os }}_${{ steps.vars.outputs.short_sha }}
|
||||
name: caddy_${{ runner.os }}_go${{ matrix.go }}_${{ steps.vars.outputs.short_sha }}
|
||||
path: ${{ matrix.CADDY_BIN_PATH }}
|
||||
compression-level: 0
|
||||
|
||||
# Commented bits below were useful to allow the job to continue
|
||||
# even if the tests fail, so we can publish the report separately
|
||||
|
@ -102,8 +122,8 @@ jobs:
|
|||
# continue-on-error: true
|
||||
run: |
|
||||
# (go test -v -coverprofile=cover-profile.out -race ./... 2>&1) > test-results/test-result.out
|
||||
go test -v -coverprofile="cover-profile.out" -short -race ./...
|
||||
# echo "::set-output name=status::$?"
|
||||
go test -tags nobadger -v -coverprofile="cover-profile.out" -short -race ./...
|
||||
# echo "status=$?" >> $GITHUB_OUTPUT
|
||||
|
||||
# Relevant step if we reinvestigate publishing test/coverage reports
|
||||
# - name: Prepare coverage reports
|
||||
|
@@ -115,21 +135,86 @@ jobs:
      # To return the correct result even though we set 'continue-on-error: true'
      # - name: Coerce correct build result
-     #   if: matrix.os != 'windows-latest' && steps.step_test.outputs.status != ${{ matrix.SUCCESS }}
+     #   if: matrix.os != 'windows' && steps.step_test.outputs.status != ${{ matrix.SUCCESS }}
      #   run: |
      #     echo "step_test ${{ steps.step_test.outputs.status }}\n"
      #     exit 1

- # From https://github.com/reviewdog/action-golangci-lint
- golangci-lint:
-   name: runner / golangci-lint
+ s390x-test:
+   name: test (s390x on IBM Z)
    runs-on: ubuntu-latest
+   if: github.event.pull_request.head.repo.full_name == 'caddyserver/caddy' && github.actor != 'dependabot[bot]'
+   continue-on-error: true  # August 2020: s390x VM is down due to weather and power issues
    steps:
-     - name: Checkout code into the Go module directory
-       uses: actions/checkout@v2
+     - name: Checkout code
+       uses: actions/checkout@v4
+     - name: Run Tests
+       run: |
+         set +e
+         mkdir -p ~/.ssh && echo -e "${SSH_KEY//_/\\n}" > ~/.ssh/id_ecdsa && chmod og-rwx ~/.ssh/id_ecdsa

-     - name: Run golangci-lint
-       uses: reviewdog/action-golangci-lint@v1
-       # uses: docker://reviewdog/action-golangci-lint:v1 # pre-build docker image
+         # short sha is enough?
+         short_sha=$(git rev-parse --short HEAD)

+         # To shorten the following lines
+         ssh_opts="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
+         ssh_host="$CI_USER@ci-s390x.caddyserver.com"

+         # The environment is fresh, so there's no point in keeping accepting and adding the key.
+         rsync -arz -e "ssh $ssh_opts" --progress --delete --exclude '.git' . "$ssh_host":/var/tmp/"$short_sha"
+         ssh $ssh_opts -t "$ssh_host" bash <<EOF
+           cd /var/tmp/$short_sha
+           go version
+           go env
+           printf "\n\n"
+           retries=3
+           exit_code=0
+           while ((retries > 0)); do
+             CGO_ENABLED=0 go test -p 1 -tags nobadger -v ./...
+             exit_code=$?
+             if ((exit_code == 0)); then
+               break
+             fi
+             echo "\n\nTest failed: \$exit_code, retrying..."
+             ((retries--))
+           done
+           echo "Remote exit code: \$exit_code"
+           exit \$exit_code
+         EOF
+         test_result=$?

+         # There's no need leaving the files around
+         ssh $ssh_opts "$ssh_host" "rm -rf /var/tmp/'$short_sha'"

+         echo "Test exit code: $test_result"
+         exit $test_result
+       env:
+         SSH_KEY: ${{ secrets.S390X_SSH_KEY }}
+         CI_USER: ${{ secrets.CI_USER }}

  goreleaser-check:
    runs-on: ubuntu-latest
+   if: github.event.pull_request.head.repo.full_name == 'caddyserver/caddy' && github.actor != 'dependabot[bot]'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

-     - uses: goreleaser/goreleaser-action@v6
-       with:
-         github_token: ${{ secrets.github_token }}
-         version: latest
-         args: check
+     - name: Install Go
+       uses: actions/setup-go@v5
+       with:
+         go-version: "~1.23"
+         check-latest: true
+     - name: Install xcaddy
+       run: |
+         go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
+         xcaddy version
+     - uses: goreleaser/goreleaser-action@v6
+       with:
+         version: latest
+         args: build --single-target --snapshot
+       env:
+         TAG: ${{ github.head_ref || github.ref_name }}
73 .github/workflows/cross-build.yml vendored Normal file

@@ -0,0 +1,73 @@
name: Cross-Build

on:
  push:
    branches:
      - master
      - 2.*
  pull_request:
    branches:
      - master
      - 2.*

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        goos:
          - 'aix'
          - 'linux'
          - 'solaris'
          - 'illumos'
          - 'dragonfly'
          - 'freebsd'
          - 'openbsd'
          - 'windows'
          - 'darwin'
          - 'netbsd'
        go:
          - '1.22'
          - '1.23'

        include:
          # Set the minimum Go patch version for the given Go minor
          # Usable via ${{ matrix.GO_SEMVER }}
          - go: '1.22'
            GO_SEMVER: '~1.22.3'
          - go: '1.23'
            GO_SEMVER: '~1.23.0'

    runs-on: ubuntu-latest
    continue-on-error: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Install Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.GO_SEMVER }}
          check-latest: true

      - name: Print Go version and environment
        id: vars
        run: |
          printf "Using go at: $(which go)\n"
          printf "Go version: $(go version)\n"
          printf "\n\nGo environment:\n\n"
          go env
          printf "\n\nSystem environment:\n\n"
          env

      - name: Run Build
        env:
          CGO_ENABLED: 0
          GOOS: ${{ matrix.goos }}
          GOARCH: ${{ matrix.goos == 'aix' && 'ppc64' || 'amd64' }}
        shell: bash
        continue-on-error: true
        working-directory: ./cmd/caddy
        run: |
          GOOS=$GOOS GOARCH=$GOARCH go build -tags=nobadger,nomysql,nopgx -trimpath -o caddy-"$GOOS"-$GOARCH 2> /dev/null
73 .github/workflows/fuzzing.yml vendored

@@ -1,73 +0,0 @@
name: Fuzzing

on:
  # Daily midnight fuzzing
  schedule:
    - cron: '0 0 * * *'

jobs:
  fuzzing:
    name: Fuzzing

    strategy:
      matrix:
        os: [ ubuntu-latest ]
        go-version: [ 1.14.x ]
    runs-on: ${{ matrix.os }}

    steps:
      - name: Install Go
        uses: actions/setup-go@v1
        with:
          go-version: ${{ matrix.go-version }}

      - name: Checkout code
        uses: actions/checkout@v2

      - name: Download go-fuzz tools and the Fuzzit CLI, move Fuzzit CLI to GOBIN
        # If we decide we need to prevent this from running on forks, we can use this line:
        # if: github.repository == 'caddyserver/caddy'
        run: |
          go get -v github.com/dvyukov/go-fuzz/go-fuzz github.com/dvyukov/go-fuzz/go-fuzz-build
          wget -q -O fuzzit https://github.com/fuzzitdev/fuzzit/releases/download/v2.4.77/fuzzit_Linux_x86_64
          chmod a+x fuzzit
          mv fuzzit $(go env GOPATH)/bin
          echo "::add-path::$(go env GOPATH)/bin"

      - name: Generate fuzzers & submit them to Fuzzit
        continue-on-error: true
        env:
          FUZZIT_API_KEY: ${{ secrets.FUZZIT_API_KEY }}
          SYSTEM_PULLREQUEST_SOURCEBRANCH: ${{ github.ref }}
          BUILD_SOURCEVERSION: ${{ github.sha }}
        run: |
          # debug
          echo "PR Source Branch: $SYSTEM_PULLREQUEST_SOURCEBRANCH"
          echo "Source version: $BUILD_SOURCEVERSION"

          declare -A fuzzers_funcs=(\
            ["./caddyconfig/httpcaddyfile/addresses_fuzz.go"]="FuzzParseAddress" \
            ["./listeners_fuzz.go"]="FuzzParseNetworkAddress" \
            ["./replacer_fuzz.go"]="FuzzReplacer" \
          )

          declare -A fuzzers_targets=(\
            ["./caddyconfig/httpcaddyfile/addresses_fuzz.go"]="parse-address" \
            ["./listeners_fuzz.go"]="parse-network-address" \
            ["./replacer_fuzz.go"]="replacer" \
          )

          fuzz_type="fuzzing"

          for f in $(find . -name \*_fuzz.go); do
            FUZZER_DIRECTORY=$(dirname "$f")

            echo "go-fuzz-build func ${fuzzers_funcs[$f]} residing in $f"

            go-fuzz-build -func "${fuzzers_funcs[$f]}" -o "$FUZZER_DIRECTORY/${fuzzers_targets[$f]}.zip" "$FUZZER_DIRECTORY"

            fuzzit create job --engine go-fuzz caddyserver/"${fuzzers_targets[$f]}" "$FUZZER_DIRECTORY"/"${fuzzers_targets[$f]}.zip" --api-key "${FUZZIT_API_KEY}" --type "${fuzz_type}" --branch "${SYSTEM_PULLREQUEST_SOURCEBRANCH}" --revision "${BUILD_SOURCEVERSION}"

            echo "Completed $f"
          done
67 .github/workflows/lint.yml vendored Normal file

@@ -0,0 +1,67 @@
name: Lint

on:
  push:
    branches:
      - master
      - 2.*
  pull_request:
    branches:
      - master
      - 2.*

permissions:
  contents: read

jobs:
  # From https://github.com/golangci/golangci-lint-action
  golangci:
    permissions:
      contents: read # for actions/checkout to fetch code
      pull-requests: read # for golangci/golangci-lint-action to fetch pull requests
    name: lint
    strategy:
      matrix:
        os:
          - linux
          - mac
          - windows

        include:
          - os: linux
            OS_LABEL: ubuntu-latest
          - os: mac
            OS_LABEL: macos-14
          - os: windows
            OS_LABEL: windows-latest

    runs-on: ${{ matrix.OS_LABEL }}

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '~1.23'
          check-latest: true

      - name: golangci-lint
        uses: golangci/golangci-lint-action@v6
        with:
          version: latest

          # Windows times out frequently after about 5m50s if we don't set a longer timeout.
          args: --timeout 10m

          # Optional: show only new issues if it's a pull request. The default value is `false`.
          # only-new-issues: true

  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - name: govulncheck
        uses: golang/govulncheck-action@v1
        with:
          go-version-input: '~1.23.0'
          check-latest: true
158 .github/workflows/release.yml vendored

@@ -10,22 +10,48 @@ jobs:
    name: Release
    strategy:
      matrix:
-       os: [ ubuntu-latest ]
-       go-version: [ 1.14.x ]
+       os:
+         - ubuntu-latest
+       go:
+         - '1.23'

+       include:
+         # Set the minimum Go patch version for the given Go minor
+         # Usable via ${{ matrix.GO_SEMVER }}
+         - go: '1.23'
+           GO_SEMVER: '~1.23.0'

    runs-on: ${{ matrix.os }}
+   # https://github.com/sigstore/cosign/issues/1258#issuecomment-1002251233
+   # https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#adding-permissions-settings
+   permissions:
+     id-token: write
+     # https://docs.github.com/en/rest/overview/permissions-required-for-github-apps#permission-on-contents
+     # "Releases" is part of `contents`, so it needs the `write`
+     contents: write

    steps:
-     - name: Install Go
-       uses: actions/setup-go@v1
-       with:
-         go-version: ${{ matrix.go-version }}

      - name: Checkout code
-       uses: actions/checkout@v2
+       uses: actions/checkout@v4
+       with:
+         fetch-depth: 0

-     # So GoReleaser can generate the changelog properly
-     - name: Unshallowify the repo clone
-       run: git fetch --prune --unshallow
+     - name: Install Go
+       uses: actions/setup-go@v5
+       with:
+         go-version: ${{ matrix.GO_SEMVER }}
+         check-latest: true

+     # Force fetch upstream tags -- because 65 minutes
+     # tl;dr: actions/checkout@v4 runs this line:
+     #    git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +ebc278ec98bb24f2852b61fde2a9bf2e3d83818b:refs/tags/
+     # which makes its own local lightweight tag, losing all the annotations in the process. Our earlier script ran:
+     #    git fetch --prune --unshallow
+     # which doesn't overwrite that tag because that would be destructive.
+     # Credit to @francislavoie for the investigation.
+     # https://github.com/actions/checkout/issues/290#issuecomment-680260080
+     - name: Force fetch upstream tags
+       run: git fetch --tags --force

      # https://github.community/t5/GitHub-Actions/How-to-get-just-the-tag-name/m-p/32167/highlight/true#M1027
      - name: Print Go version and environment

@@ -37,32 +63,116 @@ jobs:
          go env
          printf "\n\nSystem environment:\n\n"
          env
-         echo "::set-output name=version_tag::${GITHUB_REF/refs\/tags\//}"
-         echo "::set-output name=short_sha::$(git rev-parse --short HEAD)"
-         echo "::set-output name=go_cache::$(go env GOCACHE)"
+         echo "version_tag=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
+         echo "short_sha=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

-     - name: Cache the build cache
-       uses: actions/cache@v1
-       with:
-         path: ${{ steps.vars.outputs.go_cache }}
-         key: ${{ runner.os }}-go-release-${{ hashFiles('**/go.sum') }}
-         restore-keys: |
-           ${{ runner.os }}-go-release
+         # Add "pip install" CLI tools to PATH
+         echo ~/.local/bin >> $GITHUB_PATH

+         # Parse semver
+         TAG=${GITHUB_REF/refs\/tags\//}
+         SEMVER_RE='[^0-9]*\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)\([0-9A-Za-z\.-]*\)'
+         TAG_MAJOR=`echo ${TAG#v} | sed -e "s#$SEMVER_RE#\1#"`
+         TAG_MINOR=`echo ${TAG#v} | sed -e "s#$SEMVER_RE#\2#"`
+         TAG_PATCH=`echo ${TAG#v} | sed -e "s#$SEMVER_RE#\3#"`
+         TAG_SPECIAL=`echo ${TAG#v} | sed -e "s#$SEMVER_RE#\4#"`
+         echo "tag_major=${TAG_MAJOR}" >> $GITHUB_OUTPUT
+         echo "tag_minor=${TAG_MINOR}" >> $GITHUB_OUTPUT
+         echo "tag_patch=${TAG_PATCH}" >> $GITHUB_OUTPUT
+         echo "tag_special=${TAG_SPECIAL}" >> $GITHUB_OUTPUT
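The semver-parsing step in the hunk above can be exercised on its own. A minimal standalone sketch (the tag `v2.8.0-beta.1` is an invented example for illustration; in the workflow `TAG` comes from `${GITHUB_REF}`):

```shell
# Standalone version of the workflow's semver parsing (example tag, not from a real run).
TAG="v2.8.0-beta.1"
SEMVER_RE='[^0-9]*\([0-9]*\)[.]\([0-9]*\)[.]\([0-9]*\)\([0-9A-Za-z\.-]*\)'
# Strip the leading "v", then extract each capture group with sed.
TAG_MAJOR=$(echo ${TAG#v} | sed -e "s#$SEMVER_RE#\1#")
TAG_MINOR=$(echo ${TAG#v} | sed -e "s#$SEMVER_RE#\2#")
TAG_PATCH=$(echo ${TAG#v} | sed -e "s#$SEMVER_RE#\3#")
TAG_SPECIAL=$(echo ${TAG#v} | sed -e "s#$SEMVER_RE#\4#")
echo "major=$TAG_MAJOR minor=$TAG_MINOR patch=$TAG_PATCH special=$TAG_SPECIAL"
# prints: major=2 minor=8 patch=0 special=-beta.1
```

The fourth capture group is what drives the later publish steps: a non-empty `tag_special` (e.g. `-beta.1`) routes the build to the "testing" repo only.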
      # Cloudsmith CLI tooling for pushing releases
      # See https://help.cloudsmith.io/docs/cli
      - name: Install Cloudsmith CLI
        run: pip install --upgrade cloudsmith-cli

      - name: Validate commits and tag signatures
        run: |
          # Import Matt Holt's key
          curl 'https://github.com/mholt.gpg' | gpg --import

          echo "Verifying the tag: ${{ steps.vars.outputs.version_tag }}"
          # tags are only accepted if signed by Matt's key
          git verify-tag "${{ steps.vars.outputs.version_tag }}" || exit 1

      - name: Install Cosign
        uses: sigstore/cosign-installer@main
      - name: Cosign version
        run: cosign version
      - name: Install Syft
        uses: anchore/sbom-action/download-syft@main
      - name: Syft version
        run: syft version
      - name: Install xcaddy
        run: |
          go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
          xcaddy version
      # GoReleaser will take care of publishing those artifacts into the release
      - name: Run GoReleaser
-       uses: goreleaser/goreleaser-action@v1
+       uses: goreleaser/goreleaser-action@v6
        with:
          version: latest
-         args: release --rm-dist
+         args: release --clean --timeout 60m
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          TAG: ${{ steps.vars.outputs.version_tag }}
          COSIGN_EXPERIMENTAL: 1

      # Only publish on non-special tags (e.g. non-beta)
      # We will continue to push to Gemfury for the foreseeable future, although
      # Cloudsmith is probably better, to not break things for existing users of Gemfury.
      # See https://gemfury.com/caddy/deb:caddy
      - name: Publish .deb to Gemfury
        if: ${{ steps.vars.outputs.tag_special == '' }}
        env:
          GEMFURY_PUSH_TOKEN: ${{ secrets.GEMFURY_PUSH_TOKEN }}
        run: |
          for filename in dist/*.deb; do
            # armv6 and armv7 are both "armhf" so we can skip the duplicate
            if [[ "$filename" == *"armv6"* ]]; then
              echo "Skipping $filename"
              continue
            fi

            curl -F package=@"$filename" https://${GEMFURY_PUSH_TOKEN}:@push.fury.io/caddy/
          done

      # Publish only special tags (unstable/beta/rc) to the "testing" repo
      # See https://cloudsmith.io/~caddy/repos/testing/
      - name: Publish .deb to Cloudsmith (special tags)
        if: ${{ steps.vars.outputs.tag_special != '' }}
        env:
          CLOUDSMITH_API_KEY: ${{ secrets.CLOUDSMITH_API_KEY }}
        run: |
          for filename in dist/*.deb; do
            # armv6 and armv7 are both "armhf" so we can skip the duplicate
            if [[ "$filename" == *"armv6"* ]]; then
              echo "Skipping $filename"
              continue
            fi

            echo "Pushing $filename to 'testing'"
            cloudsmith push deb caddy/testing/any-distro/any-version $filename
          done

      # Publish stable tags to Cloudsmith to both repos, "stable" and "testing"
      # See https://cloudsmith.io/~caddy/repos/stable/
      - name: Publish .deb to Cloudsmith (stable tags)
        if: ${{ steps.vars.outputs.tag_special == '' }}
        env:
          CLOUDSMITH_API_KEY: ${{ secrets.CLOUDSMITH_API_KEY }}
        run: |
          for filename in dist/*.deb; do
            # armv6 and armv7 are both "armhf" so we can skip the duplicate
            if [[ "$filename" == *"armv6"* ]]; then
              echo "Skipping $filename"
              continue
            fi

            echo "Pushing $filename to 'stable'"
            cloudsmith push deb caddy/stable/any-distro/any-version $filename

            echo "Pushing $filename to 'testing'"
            cloudsmith push deb caddy/testing/any-distro/any-version $filename
          done
35 .github/workflows/release_published.yml vendored Normal file

@@ -0,0 +1,35 @@
name: Release Published

# Event payload: https://developer.github.com/webhooks/event-payloads/#release
on:
  release:
    types: [published]

jobs:
  release:
    name: Release Published
    strategy:
      matrix:
        os:
          - ubuntu-latest
    runs-on: ${{ matrix.os }}

    steps:

      # See https://github.com/peter-evans/repository-dispatch
      - name: Trigger event on caddyserver/dist
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.REPO_DISPATCH_TOKEN }}
          repository: caddyserver/dist
          event-type: release-tagged
          client-payload: '{"tag": "${{ github.event.release.tag_name }}"}'

      - name: Trigger event on caddyserver/caddy-docker
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.REPO_DISPATCH_TOKEN }}
          repository: caddyserver/caddy-docker
          event-type: release-tagged
          client-payload: '{"tag": "${{ github.event.release.tag_name }}"}'
12 .gitignore vendored

@@ -1,15 +1,19 @@
  _gitignore/
  *.log
  Caddyfile
  Caddyfile.*
  !caddyfile/
  !caddyfile.go

  # artifacts from pprof tooling
  *.prof
  *.test

- # build artifacts
+ # build artifacts and helpers
  cmd/caddy/caddy
  cmd/caddy/caddy.exe
  cmd/caddy/tmp/*.exe
  cmd/caddy/.env

  # mac specific
  .DS_Store

@@ -20,4 +24,8 @@ vendor
  # goreleaser artifacts
  dist
  caddy-build
  caddy-dist

  # IDE files
  .idea/
  .vscode/
155 .golangci.yml

@@ -1,51 +1,182 @@
  linters-settings:
    errcheck:
-     ignore: fmt:.*,io/ioutil:^Read.*,github.com/caddyserver/caddy/v2/caddyconfig:RegisterAdapter,github.com/caddyserver/caddy/v2:RegisterModule
-     ignoretests: true
    misspell:
      locale: US
+     exclude-functions:
+       - fmt.*
+       - (go.uber.org/zap/zapcore.ObjectEncoder).AddObject
+       - (go.uber.org/zap/zapcore.ObjectEncoder).AddArray
+   gci:
+     sections:
+       - standard # Standard section: captures all standard packages.
+       - default # Default section: contains all imports that could not be matched to another section type.
+       - prefix(github.com/caddyserver/caddy/v2/cmd) # ensure that this is always at the top and always has a line break.
+       - prefix(github.com/caddyserver/caddy) # Custom section: groups all imports with the specified Prefix.
+     # Skip generated files.
+     # Default: true
+     skip-generated: true
+     # Enable custom order of sections.
+     # If `true`, make the section order the same as the order of `sections`.
+     # Default: false
+     custom-order: true
+   exhaustive:
+     ignore-enum-types: reflect.Kind|svc.Cmd

  linters:
    disable-all: true
    enable:
+     - asasalint
+     - asciicheck
+     - bidichk
      - bodyclose
-     - prealloc
-     - unconvert
+     - decorder
+     - dogsled
+     - dupl
+     - dupword
+     - durationcheck
      - errcheck
+     - errname
+     - exhaustive
+     - gci
      - gofmt
      - goimports
+     - gofumpt
      - gosec
      - gosimple
      - govet
      - ineffassign
+     - importas
      - misspell
+     - prealloc
+     - promlinter
+     - sloglint
+     - sqlclosecheck
      - staticcheck
+     - tenv
+     - testableexamples
+     - testifylint
+     - tparallel
      - typecheck
+     - unconvert
      - unused
+     - wastedassign
+     - whitespace
+     - zerologlint
+   # these are implicitly disabled:
+   # - containedctx
+   # - contextcheck
+   # - cyclop
+   # - depguard
+   # - errchkjson
+   # - errorlint
+   # - exhaustruct
+   # - execinquery
+   # - exhaustruct
+   # - forbidigo
+   # - forcetypeassert
+   # - funlen
+   # - ginkgolinter
+   # - gocheckcompilerdirectives
+   # - gochecknoglobals
+   # - gochecknoinits
+   # - gochecksumtype
+   # - gocognit
+   # - goconst
+   # - gocritic
+   # - gocyclo
+   # - godot
+   # - godox
+   # - goerr113
+   # - goheader
+   # - gomnd
+   # - gomoddirectives
+   # - gomodguard
+   # - goprintffuncname
+   # - gosmopolitan
+   # - grouper
+   # - inamedparam
+   # - interfacebloat
+   # - ireturn
+   # - lll
+   # - loggercheck
+   # - maintidx
+   # - makezero
+   # - mirror
+   # - musttag
+   # - nakedret
+   # - nestif
+   # - nilerr
+   # - nilnil
+   # - nlreturn
+   # - noctx
+   # - nolintlint
+   # - nonamedreturns
+   # - nosprintfhostport
+   # - paralleltest
+   # - perfsprint
+   # - predeclared
+   # - protogetter
+   # - reassign
+   # - revive
+   # - rowserrcheck
+   # - stylecheck
+   # - tagalign
+   # - tagliatelle
+   # - testpackage
+   # - thelper
+   # - unparam
+   # - usestdlibvars
+   # - varnamelen
+   # - wrapcheck
+   # - wsl

  run:
    # default concurrency is a available CPU number.
    # concurrency: 4 # explicitly omit this value to fully utilize available resources.
-   deadline: 5m
+   timeout: 5m
    issues-exit-code: 1
    tests: false

  # output configuration options
  output:
-   format: 'colored-line-number'
+   formats:
+     - format: 'colored-line-number'
    print-issued-lines: true
    print-linter-name: true

  issues:
    exclude-rules:
+     - text: 'G115' # TODO: Either we should fix the issues or nuke the linter if it's bad
+       linters:
+         - gosec
      # we aren't calling unknown URL
-     - text: "G107" # G107: Url provided to HTTP request as taint input
+     - text: 'G107' # G107: Url provided to HTTP request as taint input
        linters:
          - gosec
      # as a web server that's expected to handle any template, this is totally in the hands of the user.
-     - text: "G203" # G203: Use of unescaped data in HTML templates
+     - text: 'G203' # G203: Use of unescaped data in HTML templates
        linters:
          - gosec
      # we're shelling out to known commands, not relying on user-defined input.
-     - text: "G204" # G204: Audit use of command execution
+     - text: 'G204' # G204: Audit use of command execution
        linters:
          - gosec
      # the choice of weakrand is deliberate, hence the named import "weakrand"
      - path: modules/caddyhttp/reverseproxy/selectionpolicies.go
-       text: "G404" # G404: Insecure random number source (rand)
+       text: 'G404' # G404: Insecure random number source (rand)
        linters:
          - gosec
+     - path: modules/caddyhttp/reverseproxy/streaming.go
+       text: 'G404' # G404: Insecure random number source (rand)
+       linters:
+         - gosec
+     - path: modules/logging/filters.go
+       linters:
+         - dupl
+     - path: modules/caddyhttp/matchers.go
+       linters:
+         - dupl
+     - path: modules/caddyhttp/vars.go
+       linters:
+         - dupl
+     - path: _test\.go
+       linters:
+         - errcheck
153 .goreleaser.yml

@@ -1,20 +1,39 @@
  version: 2

  before:
    hooks:
    # The build is done in this particular way to build Caddy in a designated directory named in .gitignore.
    # This is so we can run goreleaser on tag without Git complaining of being dirty. The main.go in cmd/caddy directory
    # cannot be built within that directory due to changes necessary for the build causing Git to be dirty, which
    # subsequently causes gorleaser to refuse running.
    - rm -rf caddy-build caddy-dist vendor
    # vendor Caddy deps
    - go mod vendor
    - mkdir -p caddy-build
    - cp cmd/caddy/main.go caddy-build/main.go
    - cp ./go.mod caddy-build/go.mod
    - sed -i.bkp 's|github.com/caddyserver/caddy/v2|caddy|g' ./caddy-build/go.mod
    - /bin/sh -c 'cd ./caddy-build && go mod init caddy'
    # prepare syso files for windows embedding
    - /bin/sh -c 'for a in amd64 arm arm64; do XCADDY_SKIP_BUILD=1 GOOS=windows GOARCH=$a xcaddy build {{.Env.TAG}}; done'
    - /bin/sh -c 'mv /tmp/buildenv_*/*.syso caddy-build'
    # GoReleaser doesn't seem to offer {{.Tag}} at this stage, so we have to embed it into the env
    # so we run: TAG=$(git describe --abbrev=0) goreleaser release --rm-dist --skip-publish --skip-validate
    - go mod edit -require=github.com/caddyserver/caddy/v2@{{.Env.TAG}} ./caddy-build/go.mod
    # as of Go 1.16, `go` commands no longer automatically change go.{mod,sum}. We now have to explicitly
    # run `go mod tidy`. The `/bin/sh -c '...'` is because goreleaser can't find cd in PATH without shell invocation.
    - /bin/sh -c 'cd ./caddy-build && go mod tidy'
    # vendor the deps of the prepared to-build module
    - /bin/sh -c 'cd ./caddy-build && go mod vendor'
    - git clone --depth 1 https://github.com/caddyserver/dist caddy-dist
    - mkdir -p caddy-dist/man
    - go mod download
    - go run cmd/caddy/main.go manpage --directory ./caddy-dist/man
    - gzip -r ./caddy-dist/man/
    - /bin/sh -c 'go run cmd/caddy/main.go completion bash > ./caddy-dist/scripts/bash-completion'

  builds:
    - env:
        - CGO_ENABLED=0
        - GO111MODULE=on
      main: main.go
      dir: ./caddy-build
      binary: caddy
      goos:

@@ -26,23 +45,109 @@ builds:
        - amd64
        - arm
        - arm64
        - s390x
        - ppc64le
        - riscv64
      goarm:
-       - 6
-       - 7
+       - "5"
+       - "6"
+       - "7"
      ignore:
        - goos: darwin
          goarch: arm
        - goos: darwin
          goarch: ppc64le
        - goos: darwin
          goarch: s390x
        - goos: darwin
          goarch: riscv64
        - goos: windows
          goarch: ppc64le
        - goos: windows
          goarch: s390x
        - goos: windows
          goarch: riscv64
        - goos: freebsd
          goarch: ppc64le
        - goos: freebsd
          goarch: s390x
        - goos: freebsd
          goarch: riscv64
        - goos: freebsd
          goarch: arm
          goarm: "5"
      flags:
        - -trimpath
        - -mod=readonly
      ldflags:
        - -s -w
      tags:
        - nobadger
        - nomysql
        - nopgx

  signs:
    - cmd: cosign
      signature: "${artifact}.sig"
      certificate: '{{ trimsuffix (trimsuffix .Env.artifact ".zip") ".tar.gz" }}.pem'
      args: ["sign-blob", "--yes", "--output-signature=${signature}", "--output-certificate", "${certificate}", "${artifact}"]
      artifacts: all

  sboms:
    - artifacts: binary
      documents:
        - >-
          {{ .ProjectName }}_
          {{- .Version }}_
          {{- if eq .Os "darwin" }}mac{{ else }}{{ .Os }}{{ end }}_
          {{- .Arch }}
          {{- with .Arm }}v{{ . }}{{ end }}
          {{- with .Mips }}_{{ . }}{{ end }}
          {{- if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}.sbom
      cmd: syft
      args: ["$artifact", "--file", "${document}", "--output", "cyclonedx-json"]

  archives:
-   - format_overrides:
+   - id: default
+     format_overrides:
        - goos: windows
          format: zip
-     replacements:
-       darwin: mac
      name_template: >-
        {{ .ProjectName }}_
        {{- .Version }}_
        {{- if eq .Os "darwin" }}mac{{ else }}{{ .Os }}{{ end }}_
        {{- .Arch }}
        {{- with .Arm }}v{{ . }}{{ end }}
        {{- with .Mips }}_{{ . }}{{ end }}
        {{- if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}

    # package the 'caddy-build' directory into a tarball,
    # allowing users to build the exact same set of files as ours.
    - id: source
      meta: true
      name_template: "{{ .ProjectName }}_{{ .Version }}_buildable-artifact"
      files:
        - src: LICENSE
          dst: ./LICENSE
        - src: README.md
          dst: ./README.md
        - src: AUTHORS
          dst: ./AUTHORS
        - src: ./caddy-build
          dst: ./

  source:
    enabled: true
    name_template: '{{ .ProjectName }}_{{ .Version }}_src'
    format: 'tar.gz'

    # Additional files/template/globs you want to add to the source archive.
    #
    # Default: empty.
    files:
      - vendor

  checksum:
    algorithm: sha512

@@ -50,11 +155,11 @@ nfpms:
    - id: default
      package_name: caddy

-     vendor: Light Code Labs
+     vendor: Dyanim
      homepage: https://caddyserver.com
      maintainer: Matthew Holt <mholt@users.noreply.github.com>
      description: |
-       Powerful, enterprise-ready, open source web server with automatic HTTPS written in Go
+       Caddy - Powerful, enterprise-ready, open source web server with automatic HTTPS written in Go
      license: Apache 2.0

      formats:

@@ -62,18 +167,33 @@ nfpms:
      # - rpm

      bindir: /usr/bin
-     files:
-       ./caddy-dist/init/caddy.service: /lib/systemd/system/caddy.service
-       ./caddy-dist/init/caddy-api.service: /lib/systemd/system/caddy-api.service
-       ./caddy-dist/welcome/index.html: /usr/share/caddy/index.html
-     config_files:
-       ./caddy-dist/config/Caddyfile: /etc/caddy/Caddyfile
+     contents:
+       - src: ./caddy-dist/init/caddy.service
+         dst: /lib/systemd/system/caddy.service

+       - src: ./caddy-dist/init/caddy-api.service
+         dst: /lib/systemd/system/caddy-api.service

+       - src: ./caddy-dist/welcome/index.html
+         dst: /usr/share/caddy/index.html

+       - src: ./caddy-dist/scripts/bash-completion
+         dst: /etc/bash_completion.d/caddy

+       - src: ./caddy-dist/config/Caddyfile
+         dst: /etc/caddy/Caddyfile
+         type: config

+       - src: ./caddy-dist/man/*
+         dst: /usr/share/man/man8/

      scripts:
        postinstall: ./caddy-dist/scripts/postinstall.sh
        preremove: ./caddy-dist/scripts/preremove.sh
        postremove: ./caddy-dist/scripts/postremove.sh

+     provides:
+       - httpd

  release:
    github:

@@ -89,5 +209,6 @@ changelog:
      - '^chore:'
      - '^ci:'
      - '^docs?:'
      - '^readme:'
      - '^tests?:'
+     - '^\w+\s+' # a hack to remove commit messages without colons thus don't correspond to a package
96
README.md
96
README.md
|
@@ -1,21 +1,31 @@
<p align="center">
	<a href="https://caddyserver.com"><img src="https://user-images.githubusercontent.com/1128849/36338535-05fb646a-136f-11e8-987b-e6901e717d5a.png" alt="Caddy" width="450"></a>
	<a href="https://caddyserver.com">
		<picture>
			<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/1128849/210187358-e2c39003-9a5e-4dd5-a783-6deb6483ee72.svg">
			<source media="(prefers-color-scheme: light)" srcset="https://user-images.githubusercontent.com/1128849/210187356-dfb7f1c5-ac2e-43aa-bb23-fc014280ae1f.svg">
			<img src="https://user-images.githubusercontent.com/1128849/210187356-dfb7f1c5-ac2e-43aa-bb23-fc014280ae1f.svg" alt="Caddy" width="550">
		</picture>
	</a>
	<br>
	<h3 align="center">a <a href="https://zerossl.com"><img src="https://user-images.githubusercontent.com/55066419/208327323-2770dc16-ec09-43a0-9035-c5b872c2ad7f.svg" height="28" style="vertical-align: -7.7px" valign="middle"></a> project</h3>
</p>
<hr>
<h3 align="center">Every site on HTTPS</h3>
<p align="center">Caddy is an extensible server platform that uses TLS by default.</p>
<p align="center">
	<a href="https://github.com/caddyserver/caddy/actions?query=workflow%3ACross-Platform"><img src="https://github.com/caddyserver/caddy/workflows/Cross-Platform/badge.svg"></a>
	<a href="https://pkg.go.dev/github.com/caddyserver/caddy/v2"><img src="https://img.shields.io/badge/godoc-reference-blue.svg"></a>
	<a href="https://app.fuzzit.dev/orgs/caddyserver-gh/dashboard"><img src="https://app.fuzzit.dev/badge?org_id=caddyserver-gh"></a>
	<a href="https://github.com/caddyserver/caddy/actions/workflows/ci.yml"><img src="https://github.com/caddyserver/caddy/actions/workflows/ci.yml/badge.svg"></a>
	<a href="https://pkg.go.dev/github.com/caddyserver/caddy/v2"><img src="https://img.shields.io/badge/godoc-reference-%23007d9c.svg"></a>
	<br>
	<a href="https://twitter.com/caddyserver" title="@caddyserver on Twitter"><img src="https://img.shields.io/badge/twitter-@caddyserver-55acee.svg" alt="@caddyserver on Twitter"></a>
	<a href="https://caddy.community" title="Caddy Forum"><img src="https://img.shields.io/badge/community-forum-ff69b4.svg" alt="Caddy Forum"></a>
	<br>
	<a href="https://sourcegraph.com/github.com/caddyserver/caddy?badge" title="Caddy on Sourcegraph"><img src="https://sourcegraph.com/github.com/caddyserver/caddy/-/badge.svg" alt="Caddy on Sourcegraph"></a>
	<a href="https://cloudsmith.io/~caddy/repos/"><img src="https://img.shields.io/badge/OSS%20hosting%20by-cloudsmith-blue?logo=cloudsmith" alt="Cloudsmith"></a>
</p>
<p align="center">
	<a href="https://github.com/caddyserver/caddy/releases">Download</a> ·
	<a href="https://github.com/caddyserver/caddy/releases">Releases</a> ·
	<a href="https://caddyserver.com/docs/">Documentation</a> ·
	<a href="https://caddy.community">Community</a>
	<a href="https://caddy.community">Get Help</a>
</p>

@@ -23,10 +33,11 @@
### Menu

- [Features](#features)
- [Install](#install)
- [Build from source](#build-from-source)
  - [For development](#for-development)
  - [With version information and/or plugins](#with-version-information-andor-plugins)
- [Getting started](#getting-started)
  - [Quick start](#quick-start)
  - [Overview](#overview)
- [Full documentation](#full-documentation)
- [Getting help](#getting-help)

@@ -35,53 +46,81 @@
<p align="center">
	<b>Powered by</b>
	<br>
	<a href="https://github.com/caddyserver/certmagic"><img src="https://user-images.githubusercontent.com/1128849/49704830-49d37200-fbd5-11e8-8385-767e0cd033c3.png" alt="CertMagic" width="250"></a>
	<a href="https://github.com/caddyserver/certmagic">
		<picture>
			<source media="(prefers-color-scheme: dark)" srcset="https://user-images.githubusercontent.com/55066419/206946718-740b6371-3df3-4d72-a822-47e4c48af999.png">
			<source media="(prefers-color-scheme: light)" srcset="https://user-images.githubusercontent.com/1128849/49704830-49d37200-fbd5-11e8-8385-767e0cd033c3.png">
			<img src="https://user-images.githubusercontent.com/1128849/49704830-49d37200-fbd5-11e8-8385-767e0cd033c3.png" alt="CertMagic" width="250">
		</picture>
	</a>
</p>

## Features
## [Features](https://caddyserver.com/features)

- **Easy configuration** with the [Caddyfile](https://caddyserver.com/docs/caddyfile)
- **Powerful configuration** with its [native JSON config](https://caddyserver.com/docs/json/)
- **Dynamic configuration** with the [JSON API](https://caddyserver.com/docs/api)
- [**Config adapters**](https://caddyserver.com/docs/config-adapters) if you don't like JSON
- **Automatic HTTPS** by default
  - [Let's Encrypt](https://letsencrypt.org) for public sites
  - [ZeroSSL](https://zerossl.com) and [Let's Encrypt](https://letsencrypt.org) for public names
  - Fully-managed local CA for internal names & IPs
  - Can coordinate with other Caddy instances in a cluster
  - Multi-issuer fallback
- **Stays up when other servers go down** due to TLS/OCSP/certificate-related issues
- **HTTP/1.1, HTTP/2, and experimental HTTP/3** support
- **Production-ready** after serving trillions of requests and managing millions of TLS certificates
- **Scales to hundreds of thousands of sites** as proven in production
- **HTTP/1.1, HTTP/2, and HTTP/3** all supported by default
- **Highly extensible** [modular architecture](https://caddyserver.com/docs/architecture) lets Caddy do anything without bloat
- **Runs anywhere** with **no external dependencies** (not even libc)
- Written in Go, a language with higher **memory safety guarantees** than other servers
- Actually **fun to use**
- So, so much more to discover
- So much more to [discover](https://caddyserver.com/features)

## Install

The simplest, cross-platform way to get started is to download Caddy from [GitHub Releases](https://github.com/caddyserver/caddy/releases) and place the executable file in your PATH.

See [our online documentation](https://caddyserver.com/docs/install) for other install instructions.

## Build from source

Requirements:

- [Go 1.14 or newer](https://golang.org/dl/)
- Do NOT disable [Go modules](https://github.com/golang/go/wiki/Modules) (`export GO111MODULE=on`)
- [Go 1.22.3 or newer](https://golang.org/dl/)

### For development

_**Note:** These steps [will not embed proper version information](https://github.com/golang/go/issues/29228). For that, please follow the instructions in the next section._

```bash
$ git clone "https://github.com/caddyserver/caddy.git"
$ cd caddy/cmd/caddy/
$ go build
```

_**Note:** These steps [will not embed proper version information](https://github.com/golang/go/issues/29228). For that, please follow the instructions below._
When you run Caddy, it may try to bind to low ports unless otherwise specified in your config. If your OS requires elevated privileges for this, you will need to give your new binary permission to do so. On Linux, this can be done easily with: `sudo setcap cap_net_bind_service=+ep ./caddy`

If you prefer to use `go run` which only creates temporary binaries, you can still do this with the included `setcap.sh` like so:

```bash
$ go run -exec ./setcap.sh main.go
```

If you don't want to type your password for `setcap`, use `sudo visudo` to edit your sudoers file and allow your user account to run that command without a password, for example:

```
username ALL=(ALL:ALL) NOPASSWD: /usr/sbin/setcap
```

replacing `username` with your actual username. Please be careful and only do this if you know what you are doing! We are only qualified to document how to use Caddy, not Go tooling or your computer, and we are providing these instructions for convenience only; please learn how to use your own computer at your own risk and make any needful adjustments.

### With version information and/or plugins

Using [our builder tool](https://github.com/caddyserver/xcaddy)...
Using [our builder tool, `xcaddy`](https://github.com/caddyserver/xcaddy)...

```
$ xcaddy build <caddy_version>
$ xcaddy build
```

...the following steps are automated:

@@ -90,8 +129,9 @@ $ xcaddy build <caddy_version>
2. Change into it: `cd caddy`
3. Copy [Caddy's main.go](https://github.com/caddyserver/caddy/blob/master/cmd/caddy/main.go) into the empty folder. Add imports for any custom plugins you want to add.
4. Initialize a Go module: `go mod init caddy`
5. Pin Caddy version: `go get github.com/caddyserver/caddy/v2@TAG` replacing `TAG` with a git tag or commit. You can also pin any plugin versions similarly.
6. Compile: `go build`
5. (Optional) Pin Caddy version: `go get github.com/caddyserver/caddy/v2@version` replacing `version` with a git tag, commit, or branch name.
6. (Optional) Add plugins by adding their import: `_ "import/path/here"`
7. Compile: `go build -tags=nobadger,nomysql,nopgx`

@@ -100,7 +140,7 @@ $ xcaddy build <caddy_version>

The [Caddy website](https://caddyserver.com/docs/) has documentation that includes tutorials, quick-start guides, reference, and more.

**We recommend that all users do our [Getting Started](https://caddyserver.com/docs/getting-started) guide to become familiar with using Caddy.**
**We recommend that all users -- regardless of experience level -- do our [Getting Started](https://caddyserver.com/docs/getting-started) guide to become familiar with using Caddy.**

If you've only got a minute, [the website has several quick-start tutorials](https://caddyserver.com/docs/quick-starts) to choose from! However, after finishing a quick-start tutorial, please read more documentation to understand how the software works. 🙂

@@ -119,7 +159,7 @@ The primary way to configure Caddy is through [its API](https://caddyserver.com/

Caddy exposes an unprecedented level of control compared to any web server in existence. In Caddy, you are usually setting the actual values of the initialized types in memory that power everything from your HTTP handlers and TLS handshakes to your storage medium. Caddy is also ridiculously extensible, with a powerful plugin system that makes vast improvements over other web servers.

To wield the power of this design, you need to know how the config document is structured. Please see the [our documentation site](https://caddyserver.com/docs/) for details about [Caddy's config structure](https://caddyserver.com/docs/json/).
To wield the power of this design, you need to know how the config document is structured. Please see [our documentation site](https://caddyserver.com/docs/) for details about [Caddy's config structure](https://caddyserver.com/docs/json/).

Nearly all of Caddy's configuration is contained in a single config document, rather than being scattered across CLI flags and env variables and a configuration file as with other web servers. This makes managing your server config more straightforward and reduces hidden variables/factors.

@@ -136,7 +176,9 @@ The docs are also open source. You can contribute to them here: https://github.c

## Getting help

- We **strongly recommend** that all professionals or companies using Caddy get a support contract through [Ardan Labs](https://www.ardanlabs.com/my/contact-us?dd=caddy) before help is needed.
- We advise companies using Caddy to secure a support contract through [Ardan Labs](https://www.ardanlabs.com/my/contact-us?dd=caddy) before help is needed.

- A [sponsorship](https://github.com/sponsors/mholt) goes a long way! We can offer private help to sponsors. If Caddy is benefitting your company, please consider a sponsorship. This not only helps fund full-time work to ensure the longevity of the project, it provides your company the resources, support, and discounts you need; along with being a great look for your company to your customers and potential customers!

- Individuals can exchange help for free on our community forum at https://caddy.community. Remember that people give help out of their spare time and good will. The best way to get help is to give it first!

@@ -146,7 +188,13 @@ Please use our [issue tracker](https://github.com/caddyserver/caddy/issues) only

## About

**The name "Caddy" is trademarked.** The name of the software is "Caddy", not "Caddy Server" or "CaddyServer". Please call it "Caddy" or, if you wish to clarify, "the Caddy web server". Caddy is a registered trademark of Light Code Labs, LLC.
Matthew Holt began developing Caddy in 2014 while studying computer science at Brigham Young University. (The name "Caddy" was chosen because this software helps with the tedious, mundane tasks of serving the Web, and is also a single place for multiple things to be organized together.) It soon became the first web server to use HTTPS automatically and by default, and now has hundreds of contributors and has served trillions of HTTPS requests.

**The name "Caddy" is trademarked.** The name of the software is "Caddy", not "Caddy Server" or "CaddyServer". Please call it "Caddy" or, if you wish to clarify, "the Caddy web server". Caddy is a registered trademark of Stack Holdings GmbH.

- _Project on Twitter: [@caddyserver](https://twitter.com/caddyserver)_
- _Author on Twitter: [@mholt6](https://twitter.com/mholt6)_

Caddy is a project of [ZeroSSL](https://zerossl.com), a Stack Holdings company.

Debian package repository hosting is graciously provided by [Cloudsmith](https://cloudsmith.com). Cloudsmith is the only fully hosted, cloud-native, universal package management solution, that enables your organization to create, store and share packages in any format, to any place, with total confidence.
115	admin_test.go
@@ -16,10 +16,31 @@ package caddy

import (
	"encoding/json"
	"fmt"
	"net/http"
	"reflect"
	"sync"
	"testing"
)

var testCfg = []byte(`{
		"apps": {
			"http": {
				"servers": {
					"myserver": {
						"listen": ["tcp/localhost:8080-8084"],
						"read_timeout": "30s"
					},
					"yourserver": {
						"listen": ["127.0.0.1:5000"],
						"read_header_timeout": "15s"
					}
				}
			}
		}
	}
`)

func TestUnsyncedConfigAccess(t *testing.T) {
	// each test is performed in sequence, so
	// each change builds on the previous ones;

@@ -54,6 +75,12 @@ func TestUnsyncedConfigAccess(t *testing.T) {
			path:   "/bar/qq",
			expect: `{"foo": "jet", "bar": {"aa": "bb"}, "list": ["a", "b", "c"]}`,
		},
		{
			method:    "DELETE",
			path:      "/bar/qq",
			expect:    `{"foo": "jet", "bar": {"aa": "bb"}, "list": ["a", "b", "c"]}`,
			shouldErr: true,
		},
		{
			method: "POST",
			path:   "/list",

@@ -94,7 +121,7 @@ func TestUnsyncedConfigAccess(t *testing.T) {
		}

		// decode the expected config so we can do a convenient DeepEqual
		var expectedDecoded interface{}
		var expectedDecoded any
		err = json.Unmarshal([]byte(tc.expect), &expectedDecoded)
		if err != nil {
			t.Fatalf("Test %d: Unmarshaling expected config: %v", i, err)

@@ -108,25 +135,71 @@ func TestUnsyncedConfigAccess(t *testing.T) {
	}
}

func BenchmarkLoad(b *testing.B) {
	for i := 0; i < b.N; i++ {
		cfg := []byte(`{
			"apps": {
				"http": {
					"servers": {
						"myserver": {
							"listen": ["tcp/localhost:8080-8084"],
							"read_timeout": "30s"
						},
						"yourserver": {
							"listen": ["127.0.0.1:5000"],
							"read_header_timeout": "15s"
						}
					}
				}
			}
		}
		`)
		Load(cfg, true)
// TestLoadConcurrent exercises Load under concurrent conditions
// and is most useful under test with `-race` enabled.
func TestLoadConcurrent(t *testing.T) {
	var wg sync.WaitGroup

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			_ = Load(testCfg, true)
			wg.Done()
		}()
	}
	wg.Wait()
}

type fooModule struct {
	IntField int
	StrField string
}

func (fooModule) CaddyModule() ModuleInfo {
	return ModuleInfo{
		ID:  "foo",
		New: func() Module { return new(fooModule) },
	}
}
func (fooModule) Start() error { return nil }
func (fooModule) Stop() error  { return nil }

func TestETags(t *testing.T) {
	RegisterModule(fooModule{})

	if err := Load([]byte(`{"admin": {"listen": "localhost:2999"}, "apps": {"foo": {"strField": "abc", "intField": 0}}}`), true); err != nil {
		t.Fatalf("loading: %s", err)
	}

	const key = "/" + rawConfigKey + "/apps/foo"

	// try update the config with the wrong etag
	err := changeConfig(http.MethodPost, key, []byte(`{"strField": "abc", "intField": 1}}`), fmt.Sprintf(`"/%s not_an_etag"`, rawConfigKey), false)
	if apiErr, ok := err.(APIError); !ok || apiErr.HTTPStatus != http.StatusPreconditionFailed {
		t.Fatalf("expected precondition failed; got %v", err)
	}

	// get the etag
	hash := etagHasher()
	if err := readConfig(key, hash); err != nil {
		t.Fatalf("reading: %s", err)
	}

	// do the same update with the correct key
	err = changeConfig(http.MethodPost, key, []byte(`{"strField": "abc", "intField": 1}`), makeEtag(key, hash), false)
	if err != nil {
		t.Fatalf("expected update to work; got %v", err)
	}

	// now try another update. The hash should no longer match and we should get precondition failed
	err = changeConfig(http.MethodPost, key, []byte(`{"strField": "abc", "intField": 2}`), makeEtag(key, hash), false)
	if apiErr, ok := err.(APIError); !ok || apiErr.HTTPStatus != http.StatusPreconditionFailed {
		t.Fatalf("expected precondition failed; got %v", err)
	}
}

func BenchmarkLoad(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Load(testCfg, true)
	}
}
74	caddy_test.go	Normal file
@@ -0,0 +1,74 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package caddy

import (
	"testing"
	"time"
)

func TestParseDuration(t *testing.T) {
	const day = 24 * time.Hour
	for i, tc := range []struct {
		input  string
		expect time.Duration
	}{
		{
			input:  "3h",
			expect: 3 * time.Hour,
		},
		{
			input:  "1d",
			expect: day,
		},
		{
			input:  "1d30m",
			expect: day + 30*time.Minute,
		},
		{
			input:  "1m2d",
			expect: time.Minute + day*2,
		},
		{
			input:  "1m2d30s",
			expect: time.Minute + day*2 + 30*time.Second,
		},
		{
			input:  "1d2d",
			expect: 3 * day,
		},
		{
			input:  "1.5d",
			expect: time.Duration(1.5 * float64(day)),
		},
		{
			input:  "4m1.25d",
			expect: 4*time.Minute + time.Duration(1.25*float64(day)),
		},
		{
			input:  "-1.25d12h",
			expect: time.Duration(-1.25*float64(day)) - 12*time.Hour,
		},
	} {
		actual, err := ParseDuration(tc.input)
		if err != nil {
			t.Errorf("Test %d ('%s'): Got error: %v", i, tc.input, err)
			continue
		}
		if actual != tc.expect {
			t.Errorf("Test %d ('%s'): Expected=%s Actual=%s", i, tc.input, tc.expect, actual)
		}
	}
}
@@ -15,6 +15,7 @@
package caddyfile

import (
	"bytes"
	"encoding/json"
	"fmt"

@@ -28,12 +29,12 @@ type Adapter struct {
}

// Adapt converts the Caddyfile config in body to Caddy JSON.
func (a Adapter) Adapt(body []byte, options map[string]interface{}) ([]byte, []caddyconfig.Warning, error) {
func (a Adapter) Adapt(body []byte, options map[string]any) ([]byte, []caddyconfig.Warning, error) {
	if a.ServerType == nil {
		return nil, nil, fmt.Errorf("no server type")
	}
	if options == nil {
		options = make(map[string]interface{})
		options = make(map[string]any)
	}

	filename, _ := options["filename"].(string)

@@ -51,40 +52,93 @@ func (a Adapter) Adapt(body []byte, options map[string]interface{}) ([]byte, []c
		return nil, warnings, err
	}

	marshalFunc := json.Marshal
	if options["pretty"] == "true" {
		marshalFunc = caddyconfig.JSONIndent
	// lint check: see if input was properly formatted; sometimes messy files parse
	// successfully but result in logical errors (the Caddyfile is a bad format, I'm sorry)
	if warning, different := FormattingDifference(filename, body); different {
		warnings = append(warnings, warning)
	}
	result, err := marshalFunc(cfg)

	result, err := json.Marshal(cfg)

	return result, warnings, err
}

// Unmarshaler is a type that can unmarshal
// Caddyfile tokens to set itself up for a
// JSON encoding. The goal of an unmarshaler
// is not to set itself up for actual use,
// but to set itself up for being marshaled
// into JSON. Caddyfile-unmarshaled values
// will not be used directly; they will be
// encoded as JSON and then used from that.
// Implementations must be able to support
// multiple segments (instances of their
// directive or batch of tokens); typically
// this means wrapping all token logic in
// a loop: `for d.Next() { ... }`.
// FormattingDifference returns a warning and true if the formatted version
// is any different from the input; empty warning and false otherwise.
// TODO: also perform this check on imported files
func FormattingDifference(filename string, body []byte) (caddyconfig.Warning, bool) {
	// replace windows-style newlines to normalize comparison
	normalizedBody := bytes.Replace(body, []byte("\r\n"), []byte("\n"), -1)

	formatted := Format(normalizedBody)
	if bytes.Equal(formatted, normalizedBody) {
		return caddyconfig.Warning{}, false
	}

	// find where the difference is
	line := 1
	for i, ch := range normalizedBody {
		if i >= len(formatted) || ch != formatted[i] {
			break
		}
		if ch == '\n' {
			line++
		}
	}
	return caddyconfig.Warning{
		File:    filename,
		Line:    line,
		Message: "Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies",
	}, true
}

// Unmarshaler is a type that can unmarshal Caddyfile tokens to
// set itself up for a JSON encoding. The goal of an unmarshaler
// is not to set itself up for actual use, but to set itself up for
// being marshaled into JSON. Caddyfile-unmarshaled values will not
// be used directly; they will be encoded as JSON and then used from
// that. Implementations _may_ be able to support multiple segments
// (instances of their directive or batch of tokens); typically this
// means wrapping parsing logic in a loop: `for d.Next() { ... }`.
// More commonly, only a single segment is supported, so a simple
// `d.Next()` at the start should be used to consume the module
// identifier token (directive name, etc).
type Unmarshaler interface {
	UnmarshalCaddyfile(d *Dispenser) error
}

// ServerType is a type that can evaluate a Caddyfile and set up a caddy config.
type ServerType interface {
	// Setup takes the server blocks which
	// contain tokens, as well as options
	// (e.g. CLI flags) and creates a Caddy
	// config, along with any warnings or
	// an error.
	Setup([]ServerBlock, map[string]interface{}) (*caddy.Config, []caddyconfig.Warning, error)
	// Setup takes the server blocks which contain tokens,
	// as well as options (e.g. CLI flags) and creates a
	// Caddy config, along with any warnings or an error.
	Setup([]ServerBlock, map[string]any) (*caddy.Config, []caddyconfig.Warning, error)
}

// UnmarshalModule instantiates a module with the given ID and invokes
// UnmarshalCaddyfile on the new value using the immediate next segment
// of d as input. In other words, d's next token should be the first
// token of the module's Caddyfile input.
//
// This function is used when the next segment of Caddyfile tokens
// belongs to another Caddy module. The returned value is often
// type-asserted to the module's associated type for practical use
// when setting up a config.
func UnmarshalModule(d *Dispenser, moduleID string) (Unmarshaler, error) {
	mod, err := caddy.GetModule(moduleID)
	if err != nil {
		return nil, d.Errf("getting module named '%s': %v", moduleID, err)
	}
	inst := mod.New()
	unm, ok := inst.(Unmarshaler)
	if !ok {
		return nil, d.Errf("module %s is not a Caddyfile unmarshaler; is %T", mod.ID, inst)
	}
	err = unm.UnmarshalCaddyfile(d.NewFromNextSegment())
	if err != nil {
		return nil, err
	}
	return unm, nil
}

// Interface guard
195	caddyconfig/caddyfile/dispenser.go	Executable file → Normal file
@ -17,6 +17,9 @@ package caddyfile
|
|||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
|
@ -27,6 +30,10 @@ type Dispenser struct {
|
|||
tokens []Token
|
||||
cursor int
|
||||
nesting int
|
||||
|
||||
// A map of arbitrary context data that can be used
|
||||
// to pass through some information to unmarshalers.
|
||||
context map[string]any
|
||||
}
|
||||
|
||||
// NewDispenser returns a Dispenser filled with the given tokens.
|
||||
|
@ -37,6 +44,16 @@ func NewDispenser(tokens []Token) *Dispenser {
|
|||
}
|
||||
}
|
||||
|
||||
// NewTestDispenser parses input into tokens and creates a new
|
||||
// Dispenser for test purposes only; any errors are fatal.
|
||||
func NewTestDispenser(input string) *Dispenser {
|
||||
tokens, err := allTokens("Testfile", []byte(input))
|
||||
if err != nil && err != io.EOF {
|
||||
log.Fatalf("getting all tokens from input: %v", err)
|
||||
}
|
||||
return NewDispenser(tokens)
|
||||
}
|
||||
|
||||
// Next loads the next token. Returns true if a token
|
||||
// was loaded; false otherwise. If false, all tokens
|
||||
// have been consumed.
|
||||
|
@ -88,12 +105,12 @@ func (d *Dispenser) nextOnSameLine() bool {
|
|||
d.cursor++
|
||||
return true
|
||||
}
|
||||
if d.cursor >= len(d.tokens) {
|
||||
if d.cursor >= len(d.tokens)-1 {
|
||||
return false
|
||||
}
|
||||
if d.cursor < len(d.tokens)-1 &&
|
||||
d.tokens[d.cursor].File == d.tokens[d.cursor+1].File &&
|
||||
d.tokens[d.cursor].Line+d.numLineBreaks(d.cursor) == d.tokens[d.cursor+1].Line {
|
||||
curr := d.tokens[d.cursor]
|
||||
next := d.tokens[d.cursor+1]
|
||||
if !isNextOnNewLine(curr, next) {
|
||||
d.cursor++
|
||||
return true
|
||||
}
|
||||
|
@ -109,12 +126,12 @@ func (d *Dispenser) NextLine() bool {
|
|||
d.cursor++
|
||||
return true
|
||||
}
|
||||
if d.cursor >= len(d.tokens) {
|
||||
if d.cursor >= len(d.tokens)-1 {
|
||||
return false
|
||||
}
|
||||
if d.cursor < len(d.tokens)-1 &&
|
||||
(d.tokens[d.cursor].File != d.tokens[d.cursor+1].File ||
|
||||
d.tokens[d.cursor].Line+d.numLineBreaks(d.cursor) < d.tokens[d.cursor+1].Line) {
|
||||
curr := d.tokens[d.cursor]
|
||||
next := d.tokens[d.cursor+1]
|
||||
if isNextOnNewLine(curr, next) {
|
||||
d.cursor++
|
||||
return true
|
||||
}
|
||||
|
@ -133,15 +150,15 @@ func (d *Dispenser) NextLine() bool {
|
|||
//
|
||||
// Proper use of this method looks like this:
|
||||
//
|
||||
// for nesting := d.Nesting(); d.NextBlock(nesting); {
|
||||
// }
|
||||
// for nesting := d.Nesting(); d.NextBlock(nesting); {
|
||||
// }
|
||||
//
|
||||
// However, in simple cases where it is known that the
|
||||
// Dispenser is new and has not already traversed state
|
||||
// by a loop over NextBlock(), this will do:
|
||||
//
|
||||
// for d.NextBlock(0) {
|
||||
// }
|
||||
// for d.NextBlock(0) {
|
||||
// }
|
||||
//
|
||||
// As with other token parsing logic, a loop over
|
||||
// NextBlock() should be contained within a loop over
|
||||
|
@ -189,6 +206,46 @@ func (d *Dispenser) Val() string {
|
|||
return d.tokens[d.cursor].Text
|
||||
}
|
||||
|
||||
// ValRaw gets the raw text of the current token (including quotes).
|
||||
// If the token was a heredoc, then the delimiter is not included,
|
||||
// because that is not relevant to any unmarshaling logic at this time.
|
||||
// If there is no token loaded, it returns empty string.
|
||||
func (d *Dispenser) ValRaw() string {
|
||||
if d.cursor < 0 || d.cursor >= len(d.tokens) {
|
||||
return ""
|
||||
}
|
||||
quote := d.tokens[d.cursor].wasQuoted
|
||||
if quote > 0 && quote != '<' {
|
||||
// string literal
|
||||
return string(quote) + d.tokens[d.cursor].Text + string(quote)
|
||||
}
|
||||
return d.tokens[d.cursor].Text
|
||||
}
|
||||
|
||||
// ScalarVal gets value of the current token, converted to the closest
|
||||
// scalar type. If there is no token loaded, it returns nil.
|
||||
func (d *Dispenser) ScalarVal() any {
|
||||
if d.cursor < 0 || d.cursor >= len(d.tokens) {
|
||||
return nil
|
||||
}
|
||||
quote := d.tokens[d.cursor].wasQuoted
|
||||
text := d.tokens[d.cursor].Text
|
||||
|
||||
if quote > 0 {
|
||||
return text // string literal
|
||||
}
|
||||
if num, err := strconv.Atoi(text); err == nil {
|
||||
return num
|
||||
}
|
||||
if num, err := strconv.ParseFloat(text, 64); err == nil {
|
||||
return num
|
||||
}
|
||||
if bool, err := strconv.ParseBool(text); err == nil {
|
||||
return bool
|
||||
}
|
||||
return text
|
||||
}
|
||||
|
||||
// Line gets the line number of the current token.
|
||||
// If there is no token loaded, it returns 0.
|
||||
func (d *Dispenser) Line() int {
|
||||
|
@ -237,6 +294,19 @@ func (d *Dispenser) AllArgs(targets ...*string) bool {
|
|||
return true
|
||||
}
|
||||
|
||||
// CountRemainingArgs counts the amount of remaining arguments
|
||||
// (tokens on the same line) without consuming the tokens.
|
||||
func (d *Dispenser) CountRemainingArgs() int {
|
||||
count := 0
|
||||
for d.NextArg() {
|
||||
count++
|
||||
}
|
||||
for i := 0; i < count; i++ {
|
||||
d.Prev()
|
||||
}
|
||||
return count
|
||||
}

// RemainingArgs loads any more arguments (tokens on the same line)
// into a slice and returns them. Open curly brace tokens also indicate
// the end of arguments, and the curly brace is not included in

@@ -249,6 +319,18 @@ func (d *Dispenser) RemainingArgs() []string {
	return args
}

// RemainingArgsRaw loads any more arguments (tokens on the same line,
// retaining quotes) into a slice and returns them. Open curly brace
// tokens also indicate the end of arguments, and the curly brace is
// not included in the return value nor is it loaded.
func (d *Dispenser) RemainingArgsRaw() []string {
	var args []string
	for d.NextArg() {
		args = append(args, d.ValRaw())
	}
	return args
}

// NewFromNextSegment returns a new dispenser with a copy of
// the tokens from the current token until the end of the
// "directive" whether that be to the end of the line or

@@ -313,33 +395,40 @@ func (d *Dispenser) Reset() {
// an argument.
func (d *Dispenser) ArgErr() error {
	if d.Val() == "{" {
		return d.Err("Unexpected token '{', expecting argument")
		return d.Err("unexpected token '{', expecting argument")
	}
	return d.Errf("Wrong argument count or unexpected line ending after '%s'", d.Val())
	return d.Errf("wrong argument count or unexpected line ending after '%s'", d.Val())
}

// SyntaxErr creates a generic syntax error which explains what was
// found and what was expected.
func (d *Dispenser) SyntaxErr(expected string) error {
	msg := fmt.Sprintf("%s:%d - Syntax error: Unexpected token '%s', expecting '%s'", d.File(), d.Line(), d.Val(), expected)
	msg := fmt.Sprintf("syntax error: unexpected token '%s', expecting '%s', at %s:%d import chain: ['%s']", d.Val(), expected, d.File(), d.Line(), strings.Join(d.Token().imports, "','"))
	return errors.New(msg)
}

// EOFErr returns an error indicating that the dispenser reached
// the end of the input when searching for the next token.
func (d *Dispenser) EOFErr() error {
	return d.Errf("Unexpected EOF")
	return d.Errf("unexpected EOF")
}

// Err generates a custom parse-time error with a message of msg.
func (d *Dispenser) Err(msg string) error {
	msg = fmt.Sprintf("%s:%d - Error during parsing: %s", d.File(), d.Line(), msg)
	return errors.New(msg)
	return d.WrapErr(errors.New(msg))
}

// Errf is like Err, but for formatted error messages
func (d *Dispenser) Errf(format string, args ...interface{}) error {
	return d.Err(fmt.Sprintf(format, args...))
func (d *Dispenser) Errf(format string, args ...any) error {
	return d.WrapErr(fmt.Errorf(format, args...))
}

// WrapErr takes an existing error and adds the Caddyfile file and line number.
func (d *Dispenser) WrapErr(err error) error {
	if len(d.Token().imports) > 0 {
		return fmt.Errorf("%w, at %s:%d import chain ['%s']", err, d.File(), d.Line(), strings.Join(d.Token().imports, "','"))
	}
	return fmt.Errorf("%w, at %s:%d", err, d.File(), d.Line())
}

// Delete deletes the current token and returns the updated slice

@@ -359,14 +448,42 @@ func (d *Dispenser) Delete() []Token {
	return d.tokens
}

// numLineBreaks counts how many line breaks are in the token
// value given by the token index tknIdx. It returns 0 if the
// token does not exist or there are no line breaks.
func (d *Dispenser) numLineBreaks(tknIdx int) int {
	if tknIdx < 0 || tknIdx >= len(d.tokens) {
		return 0
// DeleteN is the same as Delete, but can delete many tokens at once.
// If there aren't N tokens available to delete, none are deleted.
func (d *Dispenser) DeleteN(amount int) []Token {
	if amount > 0 && d.cursor >= (amount-1) && d.cursor <= len(d.tokens)-1 {
		d.tokens = append(d.tokens[:d.cursor-(amount-1)], d.tokens[d.cursor+1:]...)
		d.cursor -= amount
	}
	return strings.Count(d.tokens[tknIdx].Text, "\n")
	return d.tokens
}

// SetContext sets a key-value pair in the context map.
func (d *Dispenser) SetContext(key string, value any) {
	if d.context == nil {
		d.context = make(map[string]any)
	}
	d.context[key] = value
}

// GetContext gets the value of a key in the context map.
func (d *Dispenser) GetContext(key string) any {
	if d.context == nil {
		return nil
	}
	return d.context[key]
}

// GetContextString gets the value of a key in the context map
// as a string, or an empty string if the key does not exist.
func (d *Dispenser) GetContextString(key string) string {
	if d.context == nil {
		return ""
	}
	if val, ok := d.context[key].(string); ok {
		return val
	}
	return ""
}
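The context helpers above amount to a nil-safe, lazily initialized map with a typed string accessor. The same pattern standalone (`ctxMap` is invented for illustration):

```go
package main

import "fmt"

// ctxMap shows the lazily initialized, nil-safe map pattern used by
// the Dispenser context helpers.
type ctxMap struct {
	m map[string]any
}

func (c *ctxMap) set(key string, value any) {
	if c.m == nil {
		c.m = make(map[string]any)
	}
	c.m[key] = value
}

// getString returns the value only when it is actually a string,
// and the empty string otherwise (missing key, nil map, or wrong type).
func (c *ctxMap) getString(key string) string {
	if c.m == nil {
		return ""
	}
	if val, ok := c.m[key].(string); ok {
		return val
	}
	return ""
}

func main() {
	var c ctxMap
	fmt.Printf("%q\n", c.getString("matcher_name")) // safe before any set
	c.set("matcher_name", "@api")
	c.set("count", 3) // non-string value
	fmt.Printf("%q %q\n", c.getString("matcher_name"), c.getString("count"))
}
```

The type assertion in `getString` is what lets callers like the matcher-name lookup treat "absent" and "not a string" identically.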

// isNewLine determines whether the current token is on a different

@@ -379,6 +496,26 @@ func (d *Dispenser) isNewLine() bool {
	if d.cursor > len(d.tokens)-1 {
		return false
	}
	return d.tokens[d.cursor-1].File != d.tokens[d.cursor].File ||
		d.tokens[d.cursor-1].Line+d.numLineBreaks(d.cursor-1) < d.tokens[d.cursor].Line

	prev := d.tokens[d.cursor-1]
	curr := d.tokens[d.cursor]
	return isNextOnNewLine(prev, curr)
}

// isNextOnNewLine determines whether the current token is on a different
// line (higher line number) than the next token. It handles imported
// tokens correctly. If there isn't a next token, it returns true.
func (d *Dispenser) isNextOnNewLine() bool {
	if d.cursor < 0 {
		return false
	}
	if d.cursor >= len(d.tokens)-1 {
		return true
	}

	curr := d.tokens[d.cursor]
	next := d.tokens[d.cursor+1]
	return isNextOnNewLine(curr, next)
}

const MatcherNameCtxKey = "matcher_name"
15 caddyconfig/caddyfile/dispenser_test.go Executable file → Normal file

@@ -15,8 +15,7 @@
package caddyfile

import (
	"io"
	"log"
	"errors"
	"reflect"
	"strings"
	"testing"

@@ -305,14 +304,10 @@ func TestDispenser_ArgErr_Err(t *testing.T) {
	if !strings.Contains(err.Error(), "foobar") {
		t.Errorf("Expected error message with custom message in it ('foobar'); got '%v'", err)
	}
}

// NewTestDispenser parses input into tokens and creates a new
// Disenser for test purposes only; any errors are fatal.
func NewTestDispenser(input string) *Dispenser {
	tokens, err := allTokens("Testfile", []byte(input))
	if err != nil && err != io.EOF {
		log.Fatalf("getting all tokens from input: %v", err)
	ErrBarIsFull := errors.New("bar is full")
	bookingError := d.Errf("unable to reserve: %w", ErrBarIsFull)
	if !errors.Is(bookingError, ErrBarIsFull) {
		t.Errorf("Errf(): should be able to unwrap the error chain")
	}
	return NewDispenser(tokens)
}
@@ -17,6 +17,7 @@ package caddyfile
import (
	"bytes"
	"io"
	"slices"
	"unicode"
)

@@ -31,6 +32,14 @@ func Format(input []byte) []byte {
	out := new(bytes.Buffer)
	rdr := bytes.NewReader(input)

	type heredocState int

	const (
		heredocClosed  heredocState = 0
		heredocOpening heredocState = 1
		heredocOpened  heredocState = 2
	)

	var (
		last rune // the last character that was written to the result

@@ -47,6 +56,11 @@ func Format(input []byte) []byte {
		quoted  bool // whether we're in a quoted segment
		escaped bool // whether current char is escaped

		heredoc              heredocState // whether we're in a heredoc
		heredocEscaped       bool         // whether heredoc is escaped
		heredocMarker        []rune
		heredocClosingMarker []rune

		nesting int // indentation level
	)

@@ -75,9 +89,68 @@ func Format(input []byte) []byte {
			panic(err)
		}

		// detect whether we have the start of a heredoc
		if !quoted && !(heredoc != heredocClosed || heredocEscaped) &&
			space && last == '<' && ch == '<' {
			write(ch)
			heredoc = heredocOpening
			space = false
			continue
		}

		if heredoc == heredocOpening {
			if ch == '\n' {
				if len(heredocMarker) > 0 && heredocMarkerRegexp.MatchString(string(heredocMarker)) {
					heredoc = heredocOpened
				} else {
					heredocMarker = nil
					heredoc = heredocClosed
					nextLine()
					continue
				}
				write(ch)
				continue
			}
			if unicode.IsSpace(ch) {
				// a space means it's just a regular token and not a heredoc
				heredocMarker = nil
				heredoc = heredocClosed
			} else {
				heredocMarker = append(heredocMarker, ch)
				write(ch)
				continue
			}
		}
		// if we're in a heredoc, all characters are read&write as-is
		if heredoc == heredocOpened {
			heredocClosingMarker = append(heredocClosingMarker, ch)
			if len(heredocClosingMarker) > len(heredocMarker)+1 { // We assert that the heredocClosingMarker is followed by a unicode.Space
				heredocClosingMarker = heredocClosingMarker[1:]
			}
			// check if we're done
			if unicode.IsSpace(ch) && slices.Equal(heredocClosingMarker[:len(heredocClosingMarker)-1], heredocMarker) {
				heredocMarker = nil
				heredocClosingMarker = nil
				heredoc = heredocClosed
			} else {
				write(ch)
				if ch == '\n' {
					heredocClosingMarker = heredocClosingMarker[:0]
				}
				continue
			}
		}

		if last == '<' && space {
			space = false
		}

		if comment {
			if ch == '\n' {
				comment = false
				space = true
				nextLine()
				continue
			} else {
				write(ch)
				continue

@@ -95,6 +168,9 @@ func Format(input []byte) []byte {
		}

		if escaped {
			if ch == '<' {
				heredocEscaped = true
			}
			write(ch)
			escaped = false
			continue

@@ -114,6 +190,7 @@ func Format(input []byte) []byte {

		if unicode.IsSpace(ch) {
			space = true
			heredocEscaped = false
			if ch == '\n' {
				newLines++
			}

@@ -131,9 +208,6 @@ func Format(input []byte) []byte {
		//////////////////////////////////////////////////////////

		if ch == '#' {
			if !spacePrior && !beginningOfLine {
				write(' ')
			}
			comment = true
		}

@@ -153,7 +227,10 @@ func Format(input []byte) []byte {
			openBraceWritten = true
			nextLine()
			newLines = 0
			nesting++
			// prevent infinite nesting from ridiculous inputs (issue #4169)
			if nesting < 10 {
				nesting++
			}
		}

		switch {

@@ -202,6 +279,11 @@ func Format(input []byte) []byte {
			write('{')
			openBraceWritten = true
		}

		if spacePrior && ch == '<' {
			space = true
		}

		write(ch)

		beginningOfLine = false
27 caddyconfig/caddyfile/formatter_fuzz.go Normal file

@@ -0,0 +1,27 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//go:build gofuzz

package caddyfile

import "bytes"

func FuzzFormat(input []byte) int {
	formatted := Format(input)
	if bytes.Equal(formatted, Format(formatted)) {
		return 1
	}
	return 0
}
@@ -179,6 +179,11 @@ d {
	{$F}
}`,
		},
		{
			description: "env var placeholders with port",
			input:       `:{$PORT}`,
			expect:      `:{$PORT}`,
		},
		{
			description: "comments",
			input: `#a "\n"

@@ -201,7 +206,7 @@ c
}

d {
	e #f
	e#f
	# g
}

@@ -229,7 +234,7 @@ bar"
j {
	"\"k\" l m"
}`,
			expect: `"a \"b\" " #c
			expect: `"a \"b\" "#c
d

e {

@@ -305,6 +310,130 @@ bar "{\"key\":34}"`,

baz`,
		},
		{
			description: "hash within string is not a comment",
			input:       `redir / /some/#/path`,
			expect:      `redir / /some/#/path`,
		},
		{
			description: "brace does not fold into comment above",
			input: `# comment
{
	foo
}`,
			expect: `# comment
{
	foo
}`,
		},
		{
			description: "matthewpi/vscode-caddyfile-support#13",
			input: `{
	email {$ACMEEMAIL}
	#debug
}

block {
}
`,
			expect: `{
	email {$ACMEEMAIL}
	#debug
}

block {
}
`,
		},
		{
			description: "matthewpi/vscode-caddyfile-support#13 - bad formatting",
			input: `{
email {$ACMEEMAIL}
#debug
}

block {
}
`,
			expect: `{
	email {$ACMEEMAIL}
	#debug
}

block {
}
`,
		},
		{
			description: "keep heredoc as-is",
			input: `block {
	heredoc <<HEREDOC
	Here's more than one space	Here's more than one space
	HEREDOC
}
`,
			expect: `block {
	heredoc <<HEREDOC
	Here's more than one space	Here's more than one space
	HEREDOC
}
`,
		},
		{
			description: "Mixing heredoc with regular part",
			input: `block {
	heredoc <<HEREDOC
	Here's more than one space	Here's more than one space
	HEREDOC
	respond "More than one space will be eaten" 200
}

block2 {
	heredoc <<HEREDOC
	Here's more than one space	Here's more than one space
	HEREDOC
	respond "More than one space will be eaten" 200
}
`,
			expect: `block {
	heredoc <<HEREDOC
	Here's more than one space	Here's more than one space
	HEREDOC
	respond "More than one space will be eaten" 200
}

block2 {
	heredoc <<HEREDOC
	Here's more than one space	Here's more than one space
	HEREDOC
	respond "More than one space will be eaten" 200
}
`,
		},
		{
			description: "Heredoc as regular token",
			input: `block {
	heredoc <<HEREDOC "More than one space will be eaten"
}
`,
			expect: `block {
	heredoc <<HEREDOC "More than one space will be eaten"
}
`,
		},
		{
			description: "Escape heredoc",
			input: `block {
	heredoc \<<HEREDOC
	respond "More than one space will be eaten" 200
}
`,
			expect: `block {
	heredoc \<<HEREDOC
	respond "More than one space will be eaten" 200
}
`,
		},
	} {
		// the formatter should output a trailing newline,
		// even if the tests aren't written to expect that
160 caddyconfig/caddyfile/importargs.go Normal file

@@ -0,0 +1,160 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package caddyfile

import (
	"regexp"
	"strconv"
	"strings"

	"go.uber.org/zap"

	"github.com/caddyserver/caddy/v2"
)

// parseVariadic determines if the token is a variadic placeholder,
// and if so, determines the index range (start/end) of args to use.
// Returns a boolean signaling whether a variadic placeholder was found,
// and the start and end indices.
func parseVariadic(token Token, argCount int) (bool, int, int) {
	if !strings.HasPrefix(token.Text, "{args[") {
		return false, 0, 0
	}
	if !strings.HasSuffix(token.Text, "]}") {
		return false, 0, 0
	}

	argRange := strings.TrimSuffix(strings.TrimPrefix(token.Text, "{args["), "]}")
	if argRange == "" {
		caddy.Log().Named("caddyfile").Warn(
			"Placeholder "+token.Text+" cannot have an empty index",
			zap.String("file", token.File+":"+strconv.Itoa(token.Line)), zap.Strings("import_chain", token.imports))
		return false, 0, 0
	}

	start, end, found := strings.Cut(argRange, ":")

	// If no ":" delimiter is found, this is not a variadic.
	// The replacer will pick this up.
	if !found {
		return false, 0, 0
	}

	// A valid token may contain several placeholders, and
	// they may be separated by ":". It's not variadic.
	// https://github.com/caddyserver/caddy/issues/5716
	if strings.Contains(start, "}") || strings.Contains(end, "{") {
		return false, 0, 0
	}

	var (
		startIndex = 0
		endIndex   = argCount
		err        error
	)
	if start != "" {
		startIndex, err = strconv.Atoi(start)
		if err != nil {
			caddy.Log().Named("caddyfile").Warn(
				"Variadic placeholder "+token.Text+" has an invalid start index",
				zap.String("file", token.File+":"+strconv.Itoa(token.Line)), zap.Strings("import_chain", token.imports))
			return false, 0, 0
		}
	}
	if end != "" {
		endIndex, err = strconv.Atoi(end)
		if err != nil {
			caddy.Log().Named("caddyfile").Warn(
				"Variadic placeholder "+token.Text+" has an invalid end index",
				zap.String("file", token.File+":"+strconv.Itoa(token.Line)), zap.Strings("import_chain", token.imports))
			return false, 0, 0
		}
	}

	// bound check
	if startIndex < 0 || startIndex > endIndex || endIndex > argCount {
		caddy.Log().Named("caddyfile").Warn(
			"Variadic placeholder "+token.Text+" indices are out of bounds, only "+strconv.Itoa(argCount)+" argument(s) exist",
			zap.String("file", token.File+":"+strconv.Itoa(token.Line)), zap.Strings("import_chain", token.imports))
		return false, 0, 0
	}
	return true, startIndex, endIndex
}

// makeArgsReplacer prepares a Replacer which can replace
// non-variadic args placeholders in imported tokens.
func makeArgsReplacer(args []string) *caddy.Replacer {
	repl := caddy.NewEmptyReplacer()
	repl.Map(func(key string) (any, bool) {
		// TODO: Remove the deprecated {args.*} placeholder
		// support at some point in the future
		if matches := argsRegexpIndexDeprecated.FindStringSubmatch(key); len(matches) > 0 {
			// What's matched may be a substring of the key
			if matches[0] != key {
				return nil, false
			}

			value, err := strconv.Atoi(matches[1])
			if err != nil {
				caddy.Log().Named("caddyfile").Warn(
					"Placeholder {args." + matches[1] + "} has an invalid index")
				return nil, false
			}
			if value >= len(args) {
				caddy.Log().Named("caddyfile").Warn(
					"Placeholder {args." + matches[1] + "} index is out of bounds, only " + strconv.Itoa(len(args)) + " argument(s) exist")
				return nil, false
			}
			caddy.Log().Named("caddyfile").Warn(
				"Placeholder {args." + matches[1] + "} deprecated, use {args[" + matches[1] + "]} instead")
			return args[value], true
		}

		// Handle args[*] form
		if matches := argsRegexpIndex.FindStringSubmatch(key); len(matches) > 0 {
			// What's matched may be a substring of the key
			if matches[0] != key {
				return nil, false
			}

			if strings.Contains(matches[1], ":") {
				caddy.Log().Named("caddyfile").Warn(
					"Variadic placeholder {args[" + matches[1] + "]} must be a token on its own")
				return nil, false
			}
			value, err := strconv.Atoi(matches[1])
			if err != nil {
				caddy.Log().Named("caddyfile").Warn(
					"Placeholder {args[" + matches[1] + "]} has an invalid index")
				return nil, false
			}
			if value >= len(args) {
				caddy.Log().Named("caddyfile").Warn(
					"Placeholder {args[" + matches[1] + "]} index is out of bounds, only " + strconv.Itoa(len(args)) + " argument(s) exist")
				return nil, false
			}
			return args[value], true
		}

		// Not an args placeholder, ignore
		return nil, false
	})
	return repl
}

var (
	argsRegexpIndexDeprecated = regexp.MustCompile(`args\.(.+)`)
	argsRegexpIndex           = regexp.MustCompile(`args\[(.+)]`)
)
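parseVariadic above turns a `{args[start:end]}` placeholder into a half-open index range, with the bounds defaulting to 0 and the argument count. The core parsing can be sketched standalone with `strings.Cut` and `strconv` (`parseRange` is a hypothetical simplification that omits the logging and import-chain details):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRange mirrors the index handling of parseVariadic: it accepts
// text like "{args[1:3]}", "{args[:2]}" or "{args[1:]}" and returns
// the resolved [start, end) range, or ok=false if not variadic/valid.
func parseRange(text string, argCount int) (ok bool, start, end int) {
	if !strings.HasPrefix(text, "{args[") || !strings.HasSuffix(text, "]}") {
		return false, 0, 0
	}
	argRange := strings.TrimSuffix(strings.TrimPrefix(text, "{args["), "]}")
	s, e, found := strings.Cut(argRange, ":")
	if !found {
		return false, 0, 0 // plain index like {args[0]}: not variadic
	}
	start, end = 0, argCount // defaults for open-ended ranges
	var err error
	if s != "" {
		if start, err = strconv.Atoi(s); err != nil {
			return false, 0, 0
		}
	}
	if e != "" {
		if end, err = strconv.Atoi(e); err != nil {
			return false, 0, 0
		}
	}
	if start < 0 || start > end || end > argCount {
		return false, 0, 0 // out of bounds
	}
	return true, start, end
}

func main() {
	fmt.Println(parseRange("{args[1:3]}", 4)) // explicit range
	fmt.Println(parseRange("{args[:]}", 4))   // all args
	fmt.Println(parseRange("{args[0]}", 4))   // not variadic
}
```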
126 caddyconfig/caddyfile/importgraph.go Normal file

@@ -0,0 +1,126 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package caddyfile

import (
	"fmt"
	"slices"
)

type adjacency map[string][]string

type importGraph struct {
	nodes map[string]struct{}
	edges adjacency
}

func (i *importGraph) addNode(name string) {
	if i.nodes == nil {
		i.nodes = make(map[string]struct{})
	}
	if _, exists := i.nodes[name]; exists {
		return
	}
	i.nodes[name] = struct{}{}
}

func (i *importGraph) addNodes(names []string) {
	for _, name := range names {
		i.addNode(name)
	}
}

func (i *importGraph) removeNode(name string) {
	delete(i.nodes, name)
}

func (i *importGraph) removeNodes(names []string) {
	for _, name := range names {
		i.removeNode(name)
	}
}

func (i *importGraph) addEdge(from, to string) error {
	if !i.exists(from) || !i.exists(to) {
		return fmt.Errorf("one of the nodes does not exist")
	}

	if i.willCycle(to, from) {
		return fmt.Errorf("a cycle of imports exists between %s and %s", from, to)
	}

	if i.areConnected(from, to) {
		// if connected, there's nothing to do
		return nil
	}

	if i.nodes == nil {
		i.nodes = make(map[string]struct{})
	}
	if i.edges == nil {
		i.edges = make(adjacency)
	}

	i.edges[from] = append(i.edges[from], to)
	return nil
}

func (i *importGraph) addEdges(from string, tos []string) error {
	for _, to := range tos {
		err := i.addEdge(from, to)
		if err != nil {
			return err
		}
	}
	return nil
}

func (i *importGraph) areConnected(from, to string) bool {
	al, ok := i.edges[from]
	if !ok {
		return false
	}
	return slices.Contains(al, to)
}

func (i *importGraph) willCycle(from, to string) bool {
	collector := make(map[string]bool)

	var visit func(string)
	visit = func(start string) {
		if !collector[start] {
			collector[start] = true
			for _, v := range i.edges[start] {
				visit(v)
			}
		}
	}

	for _, v := range i.edges[from] {
		visit(v)
	}
	for k := range collector {
		if to == k {
			return true
		}
	}

	return false
}

func (i *importGraph) exists(key string) bool {
	_, exists := i.nodes[key]
	return exists
}
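willCycle above runs a depth-first reachability check before each edge is inserted, so the import graph can never come to hold a cycle. The same check standalone on a plain adjacency map (`reachable` and this trimmed `addEdge` are stand-ins for illustration, without the node bookkeeping):

```go
package main

import "fmt"

// reachable reports whether `to` can be reached from `from` by a
// depth-first walk of the adjacency map, mirroring willCycle.
func reachable(edges map[string][]string, from, to string) bool {
	seen := make(map[string]bool)
	var visit func(string) bool
	visit = func(n string) bool {
		if n == to {
			return true
		}
		if seen[n] {
			return false
		}
		seen[n] = true
		for _, next := range edges[n] {
			if visit(next) {
				return true
			}
		}
		return false
	}
	return visit(from)
}

// addEdge refuses an edge from->to when `from` is already reachable
// from `to`, which is exactly what would close an import cycle.
func addEdge(edges map[string][]string, from, to string) error {
	if reachable(edges, to, from) {
		return fmt.Errorf("a cycle of imports exists between %s and %s", from, to)
	}
	edges[from] = append(edges[from], to)
	return nil
}

func main() {
	edges := map[string][]string{}
	fmt.Println(addEdge(edges, "Caddyfile", "common.conf")) // <nil>
	fmt.Println(addEdge(edges, "common.conf", "tls.conf"))  // <nil>
	fmt.Println(addEdge(edges, "tls.conf", "Caddyfile"))    // cycle rejected
}
```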
272 caddyconfig/caddyfile/lexer.go Executable file → Normal file

@@ -1,4 +1,4 @@
// Copyright 2015 Light Code Labs, LLC
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

@@ -16,7 +16,11 @@ package caddyfile

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"regexp"
	"strings"
	"unicode"
)

@@ -34,12 +38,41 @@ type (

	// Token represents a single parsable unit.
	Token struct {
		File string
		Line int
		Text string
		File          string
		imports       []string
		Line          int
		Text          string
		wasQuoted     rune // enclosing quote character, if any
		heredocMarker string
		snippetName   string
	}
)

// Tokenize takes bytes as input and lexes it into
// a list of tokens that can be parsed as a Caddyfile.
// Also takes a filename to fill the token's File as
// the source of the tokens, which is important to
// determine relative paths for `import` directives.
func Tokenize(input []byte, filename string) ([]Token, error) {
	l := lexer{}
	if err := l.load(bytes.NewReader(input)); err != nil {
		return nil, err
	}
	var tokens []Token
	for {
		found, err := l.next()
		if err != nil {
			return nil, err
		}
		if !found {
			break
		}
		l.token.File = filename
		tokens = append(tokens, l.token)
	}
	return tokens, nil
}
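Tokenize drives the lexer in a found/err loop and stamps each token with the source filename and line. A toy version of that bookkeeping over a trivial whitespace lexer (`miniToken`/`miniTokenize` are invented for illustration and do none of the real lexer's quoting or heredoc handling):

```go
package main

import (
	"fmt"
	"strings"
)

// miniToken carries the same bookkeeping Tokenize fills in: the
// source file, the line number, and the token text.
type miniToken struct {
	File string
	Line int
	Text string
}

// miniTokenize splits input on whitespace, tracking line numbers, and
// stamps every token with the filename, like Tokenize does.
func miniTokenize(input, filename string) []miniToken {
	var tokens []miniToken
	line := 1
	for _, ln := range strings.Split(input, "\n") {
		for _, field := range strings.Fields(ln) {
			tokens = append(tokens, miniToken{File: filename, Line: line, Text: field})
		}
		line++
	}
	return tokens
}

func main() {
	for _, tok := range miniTokenize("localhost\nrespond \"hi\"\n", "Caddyfile") {
		fmt.Printf("%s:%d %s\n", tok.File, tok.Line, tok.Text)
	}
}
```

The file/line stamp is what makes the positional errors from WrapErr, and relative-path resolution for `import`, possible later in parsing.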
|
||||
|
||||
// load prepares the lexer to scan an input for tokens.
|
||||
// It discards any leading byte order mark.
|
||||
func (l *lexer) load(input io.Reader) error {
|
||||
|
@ -71,34 +104,114 @@ func (l *lexer) load(input io.Reader) error {
|
|||
// may be escaped. The rest of the line is skipped
|
||||
// if a "#" character is read in. Returns true if
|
||||
// a token was loaded; false otherwise.
|
||||
func (l *lexer) next() bool {
|
||||
func (l *lexer) next() (bool, error) {
|
||||
var val []rune
|
||||
var comment, quoted, escaped bool
|
||||
var comment, quoted, btQuoted, inHeredoc, heredocEscaped, escaped bool
|
||||
var heredocMarker string
|
||||
|
||||
makeToken := func() bool {
|
||||
makeToken := func(quoted rune) bool {
|
||||
l.token.Text = string(val)
|
||||
l.token.wasQuoted = quoted
|
||||
l.token.heredocMarker = heredocMarker
|
||||
return true
|
||||
}
|
||||
|
||||
for {
|
||||
// Read a character in; if err then if we had
|
||||
// read some characters, make a token. If we
|
||||
// reached EOF, then no more tokens to read.
|
||||
// If no EOF, then we had a problem.
|
||||
ch, _, err := l.reader.ReadRune()
|
||||
if err != nil {
|
||||
if len(val) > 0 {
|
||||
return makeToken()
|
||||
if inHeredoc {
|
||||
return false, fmt.Errorf("incomplete heredoc <<%s on line #%d, expected ending marker %s", heredocMarker, l.line+l.skippedLines, heredocMarker)
|
||||
}
|
||||
|
||||
return makeToken(0), nil
|
||||
}
|
||||
if err == io.EOF {
|
||||
return false
|
||||
return false, nil
|
||||
}
|
||||
panic(err)
|
||||
return false, err
|
||||
}
|
||||
|
||||
if !escaped && ch == '\\' {
|
||||
// detect whether we have the start of a heredoc
|
||||
if !(quoted || btQuoted) && !(inHeredoc || heredocEscaped) &&
|
||||
len(val) > 1 && string(val[:2]) == "<<" {
|
||||
// a space means it's just a regular token and not a heredoc
|
||||
if ch == ' ' {
|
||||
return makeToken(0), nil
|
||||
}
|
||||
|
||||
// skip CR, we only care about LF
|
||||
if ch == '\r' {
|
||||
continue
|
||||
}
|
||||
|
||||
// after hitting a newline, we know that the heredoc marker
|
||||
// is the characters after the two << and the newline.
|
||||
// we reset the val because the heredoc is syntax we don't
|
||||
// want to keep.
|
||||
if ch == '\n' {
|
||||
if len(val) == 2 {
|
||||
return false, fmt.Errorf("missing opening heredoc marker on line #%d; must contain only alpha-numeric characters, dashes and underscores; got empty string", l.line)
|
||||
}
|
||||
|
||||
// check if there's too many <
|
||||
if string(val[:3]) == "<<<" {
|
||||
return false, fmt.Errorf("too many '<' for heredoc on line #%d; only use two, for example <<END", l.line)
|
||||
}
|
||||
|
||||
heredocMarker = string(val[2:])
|
||||
if !heredocMarkerRegexp.Match([]byte(heredocMarker)) {
|
||||
return false, fmt.Errorf("heredoc marker on line #%d must contain only alpha-numeric characters, dashes and underscores; got '%s'", l.line, heredocMarker)
|
||||
}
|
||||
|
||||
inHeredoc = true
|
||||
l.skippedLines++
|
||||
val = nil
|
||||
continue
|
||||
}
|
||||
val = append(val, ch)
|
||||
continue
|
||||
}
|
||||
|
||||
// if we're in a heredoc, all characters are read as-is
|
||||
if inHeredoc {
|
||||
val = append(val, ch)
|
||||
|
||||
if ch == '\n' {
|
||||
l.skippedLines++
|
||||
}
|
||||
|
||||
// check if we're done, i.e. that the last few characters are the marker
|
||||
if len(val) >= len(heredocMarker) && heredocMarker == string(val[len(val)-len(heredocMarker):]) {
|
||||
// set the final value
|
||||
val, err = l.finalizeHeredoc(val, heredocMarker)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// set the line counter, and make the token
|
||||
l.line += l.skippedLines
|
||||
l.skippedLines = 0
|
||||
return makeToken('<'), nil
|
||||
}
|
||||
|
||||
// stay in the heredoc until we find the ending marker
|
||||
continue
|
||||
}
|
||||
|
||||
	// track whether we found an escape '\' for the next
	// iteration to be contextually aware
	if !escaped && !btQuoted && ch == '\\' {
		escaped = true
		continue
	}

	if quoted {
		if escaped {
	if quoted || btQuoted {
		if quoted && escaped {
			// all is literal in quoted area,
			// so only escape quotes
			if ch != '"' {

@@ -106,23 +219,29 @@ func (l *lexer) next() bool {
			}
			escaped = false
		} else {
			if ch == '"' {
				return makeToken()
			if (quoted && ch == '"') || (btQuoted && ch == '`') {
				return makeToken(ch), nil
			}
		}
		// allow quoted text to wrap continue on multiple lines
		if ch == '\n' {
			l.line += 1 + l.skippedLines
			l.skippedLines = 0
		}
		// collect this character as part of the quoted token
		val = append(val, ch)
		continue
	}

	if unicode.IsSpace(ch) {
		// ignore CR altogether, we only actually care about LF (\n)
		if ch == '\r' {
			continue
		}
		// end of the line
		if ch == '\n' {
			// newlines can be escaped to chain arguments
			// onto multiple lines; else, increment the line count
			if escaped {
				l.skippedLines++
				escaped = false

@@ -130,15 +249,19 @@ func (l *lexer) next() bool {
				l.line += 1 + l.skippedLines
				l.skippedLines = 0
			}
			// comments (#) are single-line only
			comment = false
		}
		// any kind of space means we're at the end of this token
		if len(val) > 0 {
			return makeToken()
			return makeToken(0), nil
		}
		continue
	}

	if ch == '#' {
	// comments must be at the start of a token,
	// in other words, preceded by space or newline
	if ch == '#' && len(val) == 0 {
		comment = true
	}
	if comment {

@@ -151,13 +274,126 @@ func (l *lexer) next() bool {
			quoted = true
			continue
		}
		if ch == '`' {
			btQuoted = true
			continue
		}
	}

	if escaped {
		val = append(val, '\\')
		// allow escaping the first < to skip the heredoc syntax
		if ch == '<' {
			heredocEscaped = true
		} else {
			val = append(val, '\\')
		}
		escaped = false
	}

	val = append(val, ch)
	}
}
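The closing-marker detection in the heredoc branch above is a suffix comparison run on every rune. A standalone sketch of that check (the `endsWithMarker` helper name is hypothetical; the lexer inlines this expression):

```go
package main

import "fmt"

// endsWithMarker reports whether the accumulated heredoc body ends
// with the closing marker, mirroring the suffix comparison in next().
func endsWithMarker(val []rune, marker string) bool {
	return len(val) >= len(marker) && marker == string(val[len(val)-len(marker):])
}

func main() {
	body := []rune("line one\nline two\nEOF")
	fmt.Println(endsWithMarker(body, "EOF"))  // true: body ends with the marker
	fmt.Println(endsWithMarker(body, "DONE")) // false
}
```

Note that this fires as soon as the marker appears at the tail of the input, which is why `finalizeHeredoc` still has to verify that the marker sits on its own line with consistent leading whitespace.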
// finalizeHeredoc takes the runes read as the heredoc text and the marker,
// and processes the text to strip leading whitespace, returning the final
// value without the leading whitespace.
func (l *lexer) finalizeHeredoc(val []rune, marker string) ([]rune, error) {
	stringVal := string(val)

	// find the last newline of the heredoc, which is where the contents end
	lastNewline := strings.LastIndex(stringVal, "\n")

	// collapse the content, then split into separate lines
	lines := strings.Split(stringVal[:lastNewline+1], "\n")

	// figure out how much whitespace we need to strip from the front of every line
	// by getting the string that precedes the marker, on the last line
	paddingToStrip := stringVal[lastNewline+1 : len(stringVal)-len(marker)]

	// iterate over each line and strip the whitespace from the front
	var out string
	for lineNum, lineText := range lines[:len(lines)-1] {
		if lineText == "" || lineText == "\r" {
			out += "\n"
			continue
		}

		// find an exact match for the padding
		index := strings.Index(lineText, paddingToStrip)

		// if the padding doesn't match exactly at the start then we can't safely strip
		if index != 0 {
			return nil, fmt.Errorf("mismatched leading whitespace in heredoc <<%s on line #%d [%s], expected whitespace [%s] to match the closing marker", marker, l.line+lineNum+1, lineText, paddingToStrip)
		}

		// strip, then append the line, with the newline, to the output.
		// also removes all "\r" because Windows.
		out += strings.ReplaceAll(lineText[len(paddingToStrip):]+"\n", "\r", "")
	}

	// Remove the trailing newline from the loop
	if len(out) > 0 && out[len(out)-1] == '\n' {
		out = out[:len(out)-1]
	}

	// return the final value
	return []rune(out), nil
}

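The stripping rule in `finalizeHeredoc` can be exercised in isolation: whatever whitespace precedes the closing marker on its own line must prefix every non-blank content line, and is removed from each. A minimal stdlib-only sketch of the same algorithm (simplified: no CR handling, hypothetical `stripHeredoc` name):

```go
package main

import (
	"fmt"
	"strings"
)

// stripHeredoc mimics finalizeHeredoc: the padding before the closing
// marker defines the indentation to strip from every body line.
func stripHeredoc(raw, marker string) (string, error) {
	lastNewline := strings.LastIndex(raw, "\n")
	lines := strings.Split(raw[:lastNewline+1], "\n")
	padding := raw[lastNewline+1 : len(raw)-len(marker)]

	var out []string
	for _, line := range lines[:len(lines)-1] {
		if line == "" {
			out = append(out, "")
			continue
		}
		if !strings.HasPrefix(line, padding) {
			return "", fmt.Errorf("line %q does not start with padding %q", line, padding)
		}
		out = append(out, line[len(padding):])
	}
	return strings.Join(out, "\n"), nil
}

func main() {
	s, err := stripHeredoc("\tfoo\n\tbar\n\tEOF", "EOF")
	fmt.Printf("%q %v\n", s, err) // "foo\nbar" <nil>
}
```

This is why the tests below expect a hard error rather than a best-effort strip when a line is indented differently from the closing marker: partial stripping would silently change the token text.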
// Quoted returns true if the token was enclosed in quotes
// (i.e. double quotes, backticks, or heredoc).
func (t Token) Quoted() bool {
	return t.wasQuoted > 0
}

// NumLineBreaks counts how many line breaks are in the token text.
func (t Token) NumLineBreaks() int {
	lineBreaks := strings.Count(t.Text, "\n")
	if t.wasQuoted == '<' {
		// heredocs have an extra linebreak because the opening
		// delimiter is on its own line and is not included in the
		// token Text itself, and the trailing newline is removed.
		lineBreaks += 2
	}
	return lineBreaks
}

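The `+2` adjustment follows from the heredoc token carrying `wasQuoted == '<'` (set by `makeToken('<')` above): the `<<MARKER` line and the closing-marker line are both source lines the token spans but whose newlines never appear in `Text`. A standalone sketch of the same accounting:

```go
package main

import (
	"fmt"
	"strings"
)

// numLineBreaks mirrors Token.NumLineBreaks: heredoc tokens span two
// extra source lines (the "<<MARKER" line and the closing-marker line)
// that are not part of the token text.
func numLineBreaks(text string, wasQuoted rune) int {
	n := strings.Count(text, "\n")
	if wasQuoted == '<' {
		n += 2
	}
	return n
}

func main() {
	// token text lexed from:  heredoc <<EOF \n a \n b \n EOF
	fmt.Println(numLineBreaks("a\nb", '<')) // 3
	fmt.Println(numLineBreaks("a\nb", '"')) // 1
}
```

Getting this count right matters for `isNextOnNewLine` below, which uses it to decide whether the token after a heredoc starts a new segment.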
// Clone returns a deep copy of the token.
func (t Token) Clone() Token {
	return Token{
		File:          t.File,
		imports:       append([]string{}, t.imports...),
		Line:          t.Line,
		Text:          t.Text,
		wasQuoted:     t.wasQuoted,
		heredocMarker: t.heredocMarker,
		snippetName:   t.snippetName,
	}
}

var heredocMarkerRegexp = regexp.MustCompile("^[A-Za-z0-9_-]+$")

// isNextOnNewLine tests whether t2 is on a different line from t1
func isNextOnNewLine(t1, t2 Token) bool {
	// If the second token is from a different file,
	// we can assume it's from a different line
	if t1.File != t2.File {
		return true
	}

	// If the second token is from a different import chain,
	// we can assume it's from a different line
	if len(t1.imports) != len(t2.imports) {
		return true
	}
	for i, im := range t1.imports {
		if im != t2.imports[i] {
			return true
		}
	}

	// If the first token (incl line breaks) ends
	// on a line earlier than the next token,
	// then the second token is on a new line
	return t1.Line+t1.NumLineBreaks() < t2.Line
}
28
caddyconfig/caddyfile/lexer_fuzz.go
Normal file

@@ -0,0 +1,28 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//go:build gofuzz

package caddyfile

func FuzzTokenize(input []byte) int {
	tokens, err := Tokenize(input, "Caddyfile")
	if err != nil {
		return 0
	}
	if len(tokens) == 0 {
		return -1
	}
	return 1
}
401
caddyconfig/caddyfile/lexer_test.go
Executable file → Normal file

@@ -1,4 +1,4 @@
// Copyright 2015 Light Code Labs, LLC
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

@@ -15,37 +15,35 @@
package caddyfile

import (
	"log"
	"strings"
	"testing"
)

type lexerTestCase struct {
	input    string
	expected []Token
}

func TestLexer(t *testing.T) {
	testCases := []lexerTestCase{
	testCases := []struct {
		input        []byte
		expected     []Token
		expectErr    bool
		errorMessage string
	}{
		{
			input:    `host:123`,
			input:    []byte(`host:123`),
			expected: []Token{
				{Line: 1, Text: "host:123"},
			},
		},
		{
			input: `host:123
			input: []byte(`host:123

					directive`,
					directive`),
			expected: []Token{
				{Line: 1, Text: "host:123"},
				{Line: 3, Text: "directive"},
			},
		},
		{
			input: `host:123 {
			input: []byte(`host:123 {
						directive
					}`,
					}`),
			expected: []Token{
				{Line: 1, Text: "host:123"},
				{Line: 1, Text: "{"},
@@ -54,7 +52,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: `host:123 { directive }`,
			input: []byte(`host:123 { directive }`),
			expected: []Token{
				{Line: 1, Text: "host:123"},
				{Line: 1, Text: "{"},
@@ -63,12 +61,12 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: `host:123 {
			input: []byte(`host:123 {
						#comment
						directive
						# comment
						foobar # another comment
					}`,
					}`),
			expected: []Token{
				{Line: 1, Text: "host:123"},
				{Line: 1, Text: "{"},
@@ -78,8 +76,28 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: `a "quoted value" b
					foobar`,
			input: []byte(`host:123 {
						# hash inside string is not a comment
						redir / /some/#/path
					}`),
			expected: []Token{
				{Line: 1, Text: "host:123"},
				{Line: 1, Text: "{"},
				{Line: 3, Text: "redir"},
				{Line: 3, Text: "/"},
				{Line: 3, Text: "/some/#/path"},
				{Line: 4, Text: "}"},
			},
		},
		{
			input: []byte("# comment at beginning of file\n# comment at beginning of line\nhost:123"),
			expected: []Token{
				{Line: 3, Text: "host:123"},
			},
		},
		{
			input: []byte(`a "quoted value" b
					foobar`),
			expected: []Token{
				{Line: 1, Text: "a"},
				{Line: 1, Text: "quoted value"},
@@ -88,7 +106,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: `A "quoted \"value\" inside" B`,
			input: []byte(`A "quoted \"value\" inside" B`),
			expected: []Token{
				{Line: 1, Text: "A"},
				{Line: 1, Text: `quoted "value" inside`},

@@ -96,7 +114,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: "An escaped \"newline\\\ninside\" quotes",
			input: []byte("An escaped \"newline\\\ninside\" quotes"),
			expected: []Token{
				{Line: 1, Text: "An"},
				{Line: 1, Text: "escaped"},

@@ -105,7 +123,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: "An escaped newline\\\noutside quotes",
			input: []byte("An escaped newline\\\noutside quotes"),
			expected: []Token{
				{Line: 1, Text: "An"},
				{Line: 1, Text: "escaped"},

@@ -115,7 +133,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: "line1\\\nescaped\nline2\nline3",
			input: []byte("line1\\\nescaped\nline2\nline3"),
			expected: []Token{
				{Line: 1, Text: "line1"},
				{Line: 1, Text: "escaped"},

@@ -124,7 +142,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: "line1\\\nescaped1\\\nescaped2\nline4\nline5",
			input: []byte("line1\\\nescaped1\\\nescaped2\nline4\nline5"),
			expected: []Token{
				{Line: 1, Text: "line1"},
				{Line: 1, Text: "escaped1"},
@@ -134,34 +152,34 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: `"unescapable\ in quotes"`,
			input: []byte(`"unescapable\ in quotes"`),
			expected: []Token{
				{Line: 1, Text: `unescapable\ in quotes`},
			},
		},
		{
			input: `"don't\escape"`,
			input: []byte(`"don't\escape"`),
			expected: []Token{
				{Line: 1, Text: `don't\escape`},
			},
		},
		{
			input: `"don't\\escape"`,
			input: []byte(`"don't\\escape"`),
			expected: []Token{
				{Line: 1, Text: `don't\\escape`},
			},
		},
		{
			input: `un\escapable`,
			input: []byte(`un\escapable`),
			expected: []Token{
				{Line: 1, Text: `un\escapable`},
			},
		},
		{
			input: `A "quoted value with line
			input: []byte(`A "quoted value with line
					break inside" {
						foobar
					}`,
					}`),
			expected: []Token{
				{Line: 1, Text: "A"},
				{Line: 1, Text: "quoted value with line\n\t\t\t\t\tbreak inside"},
@@ -171,13 +189,13 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: `"C:\php\php-cgi.exe"`,
			input: []byte(`"C:\php\php-cgi.exe"`),
			expected: []Token{
				{Line: 1, Text: `C:\php\php-cgi.exe`},
			},
		},
		{
			input: `empty "" string`,
			input: []byte(`empty "" string`),
			expected: []Token{
				{Line: 1, Text: `empty`},
				{Line: 1, Text: ``},

@@ -185,7 +203,7 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: "skip those\r\nCR characters",
			input: []byte("skip those\r\nCR characters"),
			expected: []Token{
				{Line: 1, Text: "skip"},
				{Line: 1, Text: "those"},
@@ -194,43 +212,328 @@ func TestLexer(t *testing.T) {
			},
		},
		{
			input: "\xEF\xBB\xBF:8080", // test with leading byte order mark
			input: []byte("\xEF\xBB\xBF:8080"), // test with leading byte order mark
			expected: []Token{
				{Line: 1, Text: ":8080"},
			},
		},
		{
			input: []byte("simple `backtick quoted` string"),
			expected: []Token{
				{Line: 1, Text: `simple`},
				{Line: 1, Text: `backtick quoted`},
				{Line: 1, Text: `string`},
			},
		},
		{
			input: []byte("multiline `backtick\nquoted\n` string"),
			expected: []Token{
				{Line: 1, Text: `multiline`},
				{Line: 1, Text: "backtick\nquoted\n"},
				{Line: 3, Text: `string`},
			},
		},
		{
			input: []byte("nested `\"quotes inside\" backticks` string"),
			expected: []Token{
				{Line: 1, Text: `nested`},
				{Line: 1, Text: `"quotes inside" backticks`},
				{Line: 1, Text: `string`},
			},
		},
		{
			input: []byte("reverse-nested \"`backticks` inside\" quotes"),
			expected: []Token{
				{Line: 1, Text: `reverse-nested`},
				{Line: 1, Text: "`backticks` inside"},
				{Line: 1, Text: `quotes`},
			},
		},
		{
			input: []byte(`heredoc <<EOF
content
EOF same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: "content"},
				{Line: 3, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`heredoc <<VERY-LONG-MARKER
content
VERY-LONG-MARKER same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: "content"},
				{Line: 3, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`heredoc <<EOF
extra-newline

EOF same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: "extra-newline\n"},
				{Line: 4, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`heredoc <<EOF
EOF
HERE same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: ``},
				{Line: 3, Text: `HERE`},
				{Line: 3, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`heredoc <<EOF
EOF same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: ""},
				{Line: 2, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`heredoc <<EOF
	content
	EOF same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: "content"},
				{Line: 3, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`prev-line
	heredoc <<EOF
		multi
		line
		content
	EOF same-line-arg
	next-line
	`),
			expected: []Token{
				{Line: 1, Text: `prev-line`},
				{Line: 2, Text: `heredoc`},
				{Line: 2, Text: "\tmulti\n\tline\n\tcontent"},
				{Line: 6, Text: `same-line-arg`},
				{Line: 7, Text: `next-line`},
			},
		},
		{
			input: []byte(`escaped-heredoc \<< >>`),
			expected: []Token{
				{Line: 1, Text: `escaped-heredoc`},
				{Line: 1, Text: `<<`},
				{Line: 1, Text: `>>`},
			},
		},
		{
			input: []byte(`not-a-heredoc <EOF
content
	`),
			expected: []Token{
				{Line: 1, Text: `not-a-heredoc`},
				{Line: 1, Text: `<EOF`},
				{Line: 2, Text: `content`},
			},
		},
		{
			input: []byte(`not-a-heredoc <<<EOF content`),
			expected: []Token{
				{Line: 1, Text: `not-a-heredoc`},
				{Line: 1, Text: `<<<EOF`},
				{Line: 1, Text: `content`},
			},
		},
		{
			input: []byte(`not-a-heredoc "<<" ">>"`),
			expected: []Token{
				{Line: 1, Text: `not-a-heredoc`},
				{Line: 1, Text: `<<`},
				{Line: 1, Text: `>>`},
			},
		},
		{
			input: []byte(`not-a-heredoc << >>`),
			expected: []Token{
				{Line: 1, Text: `not-a-heredoc`},
				{Line: 1, Text: `<<`},
				{Line: 1, Text: `>>`},
			},
		},
		{
			input: []byte(`not-a-heredoc <<HERE SAME LINE
content
HERE same-line-arg
	`),
			expected: []Token{
				{Line: 1, Text: `not-a-heredoc`},
				{Line: 1, Text: `<<HERE`},
				{Line: 1, Text: `SAME`},
				{Line: 1, Text: `LINE`},
				{Line: 2, Text: `content`},
				{Line: 3, Text: `HERE`},
				{Line: 3, Text: `same-line-arg`},
			},
		},
		{
			input: []byte(`heredoc <<s
�
s
	`),
			expected: []Token{
				{Line: 1, Text: `heredoc`},
				{Line: 1, Text: "�"},
			},
		},
		{
			input: []byte("\u000Aheredoc \u003C\u003C\u0073\u0073\u000A\u00BF\u0057\u0001\u0000\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u003D\u001F\u000A\u0073\u0073\u000A\u00BF\u0057\u0001\u0000\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u003D\u001F\u000A\u00BF\u00BF\u0057\u0001\u0000\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u003D\u001F"),
			expected: []Token{
				{
					Line: 2,
					Text: "heredoc",
				},
				{
					Line: 2,
					Text: "\u00BF\u0057\u0001\u0000\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u003D\u001F",
				},
				{
					Line: 5,
					Text: "\u00BF\u0057\u0001\u0000\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u003D\u001F",
				},
				{
					Line: 6,
					Text: "\u00BF\u00BF\u0057\u0001\u0000\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u00FF\u003D\u001F",
				},
			},
		},
		{
			input:        []byte("not-a-heredoc <<\n"),
			expectErr:    true,
			errorMessage: "missing opening heredoc marker on line #1; must contain only alpha-numeric characters, dashes and underscores; got empty string",
		},
		{
			input: []byte(`heredoc <<<EOF
content
EOF same-line-arg
	`),
			expectErr:    true,
			errorMessage: "too many '<' for heredoc on line #1; only use two, for example <<END",
		},
		{
			input: []byte(`heredoc <<EOF
content
	`),
			expectErr:    true,
			errorMessage: "incomplete heredoc <<EOF on line #3, expected ending marker EOF",
		},
		{
			input: []byte(`heredoc <<EOF
	content
		EOF
	`),
			expectErr:    true,
			errorMessage: "mismatched leading whitespace in heredoc <<EOF on line #2 [\tcontent], expected whitespace [\t\t] to match the closing marker",
		},
		{
			input: []byte(`heredoc <<EOF
    content
		EOF
	`),
			expectErr:    true,
			errorMessage: "mismatched leading whitespace in heredoc <<EOF on line #2 [    content], expected whitespace [\t\t] to match the closing marker",
		},
		{
			input: []byte(`heredoc <<EOF
The next line is a blank line

The previous line is a blank line
EOF`),
			expected: []Token{
				{Line: 1, Text: "heredoc"},
				{Line: 1, Text: "The next line is a blank line\n\nThe previous line is a blank line"},
			},
		},
		{
			input: []byte(`heredoc <<EOF
	One tab indented heredoc with blank next line

	One tab indented heredoc with blank previous line
	EOF`),
			expected: []Token{
				{Line: 1, Text: "heredoc"},
				{Line: 1, Text: "One tab indented heredoc with blank next line\n\nOne tab indented heredoc with blank previous line"},
			},
		},
		{
			input: []byte(`heredoc <<EOF
The next line is a blank line with one tab
	
The previous line is a blank line with one tab
EOF`),
			expected: []Token{
				{Line: 1, Text: "heredoc"},
				{Line: 1, Text: "The next line is a blank line with one tab\n\t\nThe previous line is a blank line with one tab"},
			},
		},
		{
			input: []byte(`heredoc <<EOF
		The next line is a blank line with one tab less than the correct indentation
	
		The previous line is a blank line with one tab less than the correct indentation
		EOF`),
			expectErr:    true,
			errorMessage: "mismatched leading whitespace in heredoc <<EOF on line #3 [\t], expected whitespace [\t\t] to match the closing marker",
		},
	}

	for i, testCase := range testCases {
		actual := tokenize(testCase.input)
		actual, err := Tokenize(testCase.input, "")
		if testCase.expectErr {
			if err == nil {
				t.Fatalf("expected error, got actual: %v", actual)
				continue
			}
			if err.Error() != testCase.errorMessage {
				t.Fatalf("expected error '%v', got: %v", testCase.errorMessage, err)
			}
			continue
		}

		if err != nil {
			t.Fatalf("%v", err)
		}
		lexerCompare(t, i, testCase.expected, actual)
	}
}

func tokenize(input string) (tokens []Token) {
	l := lexer{}
	if err := l.load(strings.NewReader(input)); err != nil {
		log.Printf("[ERROR] load failed: %v", err)
	}
	for l.next() {
		tokens = append(tokens, l.token)
	}
	return
}

func lexerCompare(t *testing.T, n int, expected, actual []Token) {
	if len(expected) != len(actual) {
		t.Errorf("Test case %d: expected %d token(s) but got %d", n, len(expected), len(actual))
		t.Fatalf("Test case %d: expected %d token(s) but got %d", n, len(expected), len(actual))
	}

	for i := 0; i < len(actual) && i < len(expected); i++ {
		if actual[i].Line != expected[i].Line {
			t.Errorf("Test case %d token %d ('%s'): expected line %d but was line %d",
			t.Fatalf("Test case %d token %d ('%s'): expected line %d but was line %d",
				n, i, expected[i].Text, expected[i].Line, actual[i].Line)
			break
		}
		if actual[i].Text != expected[i].Text {
			t.Errorf("Test case %d token %d: expected text '%s' but was '%s'",
			t.Fatalf("Test case %d token %d: expected text '%s' but was '%s'",
				n, i, expected[i].Text, actual[i].Text)
			break
		}
414
caddyconfig/caddyfile/parse.go
Executable file → Normal file
@@ -1,4 +1,4 @@
// Copyright 2015 Light Code Labs, LLC
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

@@ -16,11 +16,15 @@ package caddyfile

import (
	"bytes"
	"io/ioutil"
	"log"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"

	"go.uber.org/zap"

	"github.com/caddyserver/caddy/v2"
)

// Parse parses the input just enough to group tokens, in
@@ -33,16 +37,36 @@ import (
// Environment variables in {$ENVIRONMENT_VARIABLE} notation
// will be replaced before parsing begins.
func Parse(filename string, input []byte) ([]ServerBlock, error) {
	tokens, err := allTokens(filename, input)
	// unfortunately, we must copy the input because parsing must
	// remain a read-only operation, but we have to expand environment
	// variables before we parse, which changes the underlying array (#4422)
	inputCopy := make([]byte, len(input))
	copy(inputCopy, input)

	tokens, err := allTokens(filename, inputCopy)
	if err != nil {
		return nil, err
	}
	p := parser{Dispenser: NewDispenser(tokens)}
	p := parser{
		Dispenser: NewDispenser(tokens),
		importGraph: importGraph{
			nodes: make(map[string]struct{}),
			edges: make(adjacency),
		},
	}
	return p.parseAll()
}

// allTokens lexes the entire input, but does not parse it.
// It returns all the tokens from the input, unstructured
// and in order. It may mutate input as it expands env vars.
func allTokens(filename string, input []byte) ([]Token, error) {
	return Tokenize(replaceEnvVars(input), filename)
}

// replaceEnvVars replaces all occurrences of environment variables.
func replaceEnvVars(input []byte) ([]byte, error) {
// It mutates the underlying array and returns the updated slice.
func replaceEnvVars(input []byte) []byte {
	var offset int
	for {
		begin := bytes.Index(input[offset:], spanOpen)
@@ -57,44 +81,33 @@ func replaceEnvVars(input []byte) ([]byte, error) {
		end += begin + len(spanOpen) // make end relative to input, not begin

		// get the name; if there is no name, skip it
		envVarName := input[begin+len(spanOpen) : end]
		if len(envVarName) == 0 {
		envString := input[begin+len(spanOpen) : end]
		if len(envString) == 0 {
			offset = end + len(spanClose)
			continue
		}

		// split the string into a key and an optional default
		envParts := strings.SplitN(string(envString), envVarDefaultDelimiter, 2)

		// do a lookup for the env var, replace with the default if not found
		envVarValue, found := os.LookupEnv(envParts[0])
		if !found && len(envParts) == 2 {
			envVarValue = envParts[1]
		}

		// get the value of the environment variable
		envVarValue := []byte(os.ExpandEnv(os.Getenv(string(envVarName))))
		// note that this causes one-level deep chaining
		envVarBytes := []byte(envVarValue)

		// splice in the value
		input = append(input[:begin],
			append(envVarValue, input[end+len(spanClose):]...)...)
			append(envVarBytes, input[end+len(spanClose):]...)...)

		// continue at the end of the replacement
		offset = begin + len(envVarValue)
		offset = begin + len(envVarBytes)
	}
	return input, nil
}

// allTokens lexes the entire input, but does not parse it.
// It returns all the tokens from the input, unstructured
// and in order.
func allTokens(filename string, input []byte) ([]Token, error) {
	input, err := replaceEnvVars(input)
	if err != nil {
		return nil, err
	}
	l := new(lexer)
	err = l.load(bytes.NewReader(input))
	if err != nil {
		return nil, err
	}
	var tokens []Token
	for l.next() {
		l.token.File = filename
		tokens = append(tokens, l.token)
	}
	return tokens, nil
	return input
}

type parser struct {

@@ -103,6 +116,7 @@ type parser struct {
	eof             bool // if we encounter a valid EOF in a hard place
	definedSnippets map[string][]Token
	nesting         int
	importGraph     importGraph
}

func (p *parser) parseAll() ([]ServerBlock, error) {

@@ -135,7 +149,6 @@ func (p *parser) begin() error {
	}

	err := p.addresses()

	if err != nil {
		return err
	}

@@ -146,6 +159,25 @@ func (p *parser) begin() error {
		return nil
	}

	if ok, name := p.isNamedRoute(); ok {
		// we just need a dummy leading token to ease parsing later
		nameToken := p.Token()
		nameToken.Text = name

		// named routes only have one key, the route name
		p.block.Keys = []Token{nameToken}
		p.block.IsNamedRoute = true

		// get all the tokens from the block, including the braces
		tokens, err := p.blockTokens(true)
		if err != nil {
			return err
		}
		tokens = append([]Token{nameToken}, tokens...)
		p.block.Segments = []Segment{tokens}
		return nil
	}

	if ok, name := p.isSnippet(); ok {
		if p.definedSnippets == nil {
			p.definedSnippets = map[string][]Token{}

@@ -154,10 +186,18 @@ func (p *parser) begin() error {
			return p.Errf("redeclaration of previously declared snippet %s", name)
		}
		// consume all tokens til matched close brace
		tokens, err := p.snippetTokens()
		tokens, err := p.blockTokens(false)
		if err != nil {
			return err
		}
		// Just as we need to track which file the token comes from, we need to
		// keep track of which snippet the token comes from. This is helpful
		// in tracking import cycles across files/snippets by namespacing them.
		// Without this, we end up with false-positives in cycle-detection.
		for k, v := range tokens {
			v.snippetName = name
			tokens[k] = v
		}
		p.definedSnippets[name] = tokens
		// empty block keys so we don't save this block as a real server.
		p.block.Keys = nil
@@ -171,11 +211,17 @@ func (p *parser) addresses() error {
	var expectingAnother bool

	for {
		tkn := p.Val()
		value := p.Val()
		token := p.Token()

		// special case: import directive replaces tokens during parse-time
		if tkn == "import" && p.isNewLine() {
			err := p.doImport()
		// Reject request matchers if trying to define them globally
		if strings.HasPrefix(value, "@") {
			return p.Errf("request matchers may not be defined globally, they must be in a site block; found %s", value)
		}

		// Special case: import directive replaces tokens during parse-time
		if value == "import" && p.isNewLine() {
			err := p.doImport(0)
			if err != nil {
				return err
			}

@@ -183,24 +229,48 @@ func (p *parser) addresses() error {
		}

		// Open brace definitely indicates end of addresses
		if tkn == "{" {
		if value == "{" {
			if expectingAnother {
				return p.Errf("Expected another address but had '%s' - check for extra comma", tkn)
				return p.Errf("Expected another address but had '%s' - check for extra comma", value)
			}
			// Mark this server block as being defined with braces.
			// This is used to provide a better error message when
			// the user may have tried to define two server blocks
			// without having used braces, which are required in
			// that case.
			p.block.HasBraces = true
			break
		}

		if tkn != "" { // empty token possible if user typed ""
		// Users commonly forget to place a space between the address and the '{'
		if strings.HasSuffix(value, "{") {
			return p.Errf("Site addresses cannot end with a curly brace: '%s' - put a space between the token and the brace", value)
		}

		if value != "" { // empty token possible if user typed ""
			// Trailing comma indicates another address will follow, which
			// may possibly be on the next line
			if tkn[len(tkn)-1] == ',' {
				tkn = tkn[:len(tkn)-1]
			if value[len(value)-1] == ',' {
				value = value[:len(value)-1]
				expectingAnother = true
			} else {
				expectingAnother = false // but we may still see another one on this line
			}

			p.block.Keys = append(p.block.Keys, tkn)
			// If there's a comma here, it's probably because they didn't use a space
			// between their two domains, e.g. "foo.com,bar.com", which would not be
			// parsed as two separate site addresses.
			if strings.Contains(value, ",") {
				return p.Errf("Site addresses cannot contain a comma ',': '%s' - put a space after the comma to separate site addresses", value)
			}

			// After the above, a comma surrounded by spaces would result
			// in an empty token which we should ignore
			if value != "" {
				// Add the token as a site address
				token.Text = value
				p.block.Keys = append(p.block.Keys, token)
			}
		}

		// Advance token and possibly break out of loop or return error
||||
|
@@ -257,7 +327,7 @@ func (p *parser) directives() error {

		// special case: import directive replaces tokens during parse-time
		if p.Val() == "import" {
-			err := p.doImport()
+			err := p.doImport(1)
			if err != nil {
				return err
			}
@@ -283,7 +353,7 @@ func (p *parser) directives() error {
// is on the token before where the import directive was. In
// other words, call Next() to access the first token that was
// imported.
-func (p *parser) doImport() error {
+func (p *parser) doImport(nesting int) error {
	// syntax checks
	if !p.NextArg() {
		return p.ArgErr()
|
@ -292,22 +362,68 @@ func (p *parser) doImport() error {
|
|||
if importPattern == "" {
|
||||
return p.Err("Import requires a non-empty filepath")
|
||||
}
|
||||
if p.NextArg() {
|
||||
return p.Err("Import takes only one argument (glob pattern or file)")
|
||||
|
||||
// grab remaining args as placeholder replacements
|
||||
args := p.RemainingArgs()
|
||||
|
||||
// set up a replacer for non-variadic args replacement
|
||||
repl := makeArgsReplacer(args)
|
||||
|
||||
// grab all the tokens (if it exists) from within a block that follows the import
|
||||
var blockTokens []Token
|
||||
for currentNesting := p.Nesting(); p.NextBlock(currentNesting); {
|
||||
blockTokens = append(blockTokens, p.Token())
|
||||
}
|
||||
// splice out the import directive and its argument (2 tokens total)
|
||||
tokensBefore := p.tokens[:p.cursor-1]
|
||||
// initialize with size 1
|
||||
blockMapping := make(map[string][]Token, 1)
|
||||
if len(blockTokens) > 0 {
|
||||
// use such tokens to create a new dispenser, and then use it to parse each block
|
||||
bd := NewDispenser(blockTokens)
|
||||
for bd.Next() {
|
||||
// see if we can grab a key
|
||||
var currentMappingKey string
|
||||
if bd.Val() == "{" {
|
||||
return p.Err("anonymous blocks are not supported")
|
||||
}
|
||||
currentMappingKey = bd.Val()
|
||||
currentMappingTokens := []Token{}
|
||||
// read all args until end of line / {
|
||||
if bd.NextArg() {
|
||||
currentMappingTokens = append(currentMappingTokens, bd.Token())
|
||||
for bd.NextArg() {
|
||||
currentMappingTokens = append(currentMappingTokens, bd.Token())
|
||||
}
|
||||
// TODO(elee1766): we don't enter another mapping here because it's annoying to extract the { and } properly.
|
||||
// maybe someone can do that in the future
|
||||
} else {
|
||||
// attempt to enter a block and add tokens to the currentMappingTokens
|
||||
for mappingNesting := bd.Nesting(); bd.NextBlock(mappingNesting); {
|
||||
currentMappingTokens = append(currentMappingTokens, bd.Token())
|
||||
}
|
||||
}
|
||||
blockMapping[currentMappingKey] = currentMappingTokens
|
||||
}
|
||||
}
|
||||
|
||||
// splice out the import directive and its arguments
|
||||
// (2 tokens, plus the length of args)
|
||||
tokensBefore := p.tokens[:p.cursor-1-len(args)-len(blockTokens)]
|
||||
tokensAfter := p.tokens[p.cursor+1:]
|
||||
var importedTokens []Token
|
||||
var nodes []string
|
||||
|
||||
// first check snippets. That is a simple, non-recursive replacement
|
||||
if p.definedSnippets != nil && p.definedSnippets[importPattern] != nil {
|
||||
importedTokens = p.definedSnippets[importPattern]
|
||||
if len(importedTokens) > 0 {
|
||||
// just grab the first one
|
||||
nodes = append(nodes, fmt.Sprintf("%s:%s", importedTokens[0].File, importedTokens[0].snippetName))
|
||||
}
|
||||
} else {
|
||||
// make path relative to the file of the _token_ being processed rather
|
||||
// than current working directory (issue #867) and then use glob to get
|
||||
// list of matching filenames
|
||||
absFile, err := filepath.Abs(p.Dispenser.File())
|
||||
absFile, err := caddy.FastAbs(p.Dispenser.File())
|
||||
if err != nil {
|
||||
return p.Errf("Failed to get absolute path of file: %s: %v", p.Dispenser.File(), err)
|
||||
}
|
||||
|
@@ -325,20 +441,32 @@ func (p *parser) doImport() error {
			return p.Errf("Glob pattern may only contain one wildcard (*), but has others: %s", globPattern)
		}
		matches, err = filepath.Glob(globPattern)
		if err != nil {
			return p.Errf("Failed to use import pattern %s: %v", importPattern, err)
		}
		if len(matches) == 0 {
			if strings.ContainsAny(globPattern, "*?[]") {
-				log.Printf("[WARNING] No files matching import glob pattern: %s", importPattern)
+				caddy.Log().Warn("No files matching import glob pattern", zap.String("pattern", importPattern))
			} else {
				return p.Errf("File to import not found: %s", importPattern)
			}
		} else {
			// See issue #5295 - should skip any files that start with a . when iterating over them.
			sep := string(filepath.Separator)
			segGlobPattern := strings.Split(globPattern, sep)
			if strings.HasPrefix(segGlobPattern[len(segGlobPattern)-1], "*") {
				var tmpMatches []string
				for _, m := range matches {
					seg := strings.Split(m, sep)
					if !strings.HasPrefix(seg[len(seg)-1], ".") {
						tmpMatches = append(tmpMatches, m)
					}
				}
				matches = tmpMatches
			}
		}

		// collect all the imported tokens
		for _, importFile := range matches {
			newTokens, err := p.doSingleImport(importFile)
			if err != nil {
@@ -346,12 +474,117 @@ func (p *parser) doImport() error {
			}
			importedTokens = append(importedTokens, newTokens...)
		}
		nodes = matches
	}

	nodeName := p.File()
	if p.Token().snippetName != "" {
		nodeName += fmt.Sprintf(":%s", p.Token().snippetName)
	}
	p.importGraph.addNode(nodeName)
	p.importGraph.addNodes(nodes)
	if err := p.importGraph.addEdges(nodeName, nodes); err != nil {
		p.importGraph.removeNodes(nodes)
		return err
	}

	// copy the tokens so we don't overwrite p.definedSnippets
	tokensCopy := make([]Token, 0, len(importedTokens))

	var (
		maybeSnippet   bool
		maybeSnippetId bool
		index          int
	)

	// run the argument replacer on the tokens
	// golang for range slice return a copy of value
	// similarly, append also copy value
	for i, token := range importedTokens {
		// update the token's imports to refer to import directive filename, line number and snippet name if there is one
		if token.snippetName != "" {
			token.imports = append(token.imports, fmt.Sprintf("%s:%d (import %s)", p.File(), p.Line(), token.snippetName))
		} else {
			token.imports = append(token.imports, fmt.Sprintf("%s:%d (import)", p.File(), p.Line()))
		}

		// naive way of determine snippets, as snippets definition can only follow name + block
		// format, won't check for nesting correctness or any other error, that's what parser does.
		if !maybeSnippet && nesting == 0 {
			// first of the line
			if i == 0 || isNextOnNewLine(tokensCopy[i-1], token) {
				index = 0
			} else {
				index++
			}

			if index == 0 && len(token.Text) >= 3 && strings.HasPrefix(token.Text, "(") && strings.HasSuffix(token.Text, ")") {
				maybeSnippetId = true
			}
		}

		switch token.Text {
		case "{":
			nesting++
			if index == 1 && maybeSnippetId && nesting == 1 {
				maybeSnippet = true
				maybeSnippetId = false
			}
		case "}":
			nesting--
			if nesting == 0 && maybeSnippet {
				maybeSnippet = false
			}
		}
		// if it is {block}, we substitute with all tokens in the block
		// if it is {blocks.*}, we substitute with the tokens in the mapping for the *
		var skip bool
		var tokensToAdd []Token
		switch {
		case token.Text == "{block}":
			tokensToAdd = blockTokens
		case strings.HasPrefix(token.Text, "{blocks.") && strings.HasSuffix(token.Text, "}"):
			// {blocks.foo.bar} will be extracted to key `foo.bar`
			blockKey := strings.TrimPrefix(strings.TrimSuffix(token.Text, "}"), "{blocks.")
			val, ok := blockMapping[blockKey]
			if ok {
				tokensToAdd = val
			}
		default:
			skip = true
		}
		if !skip {
			if len(tokensToAdd) == 0 {
				// if there is no content in the snippet block, don't do any replacement
				// this allows snippets which contained {block}/{block.*} before this change to continue functioning as normal
				tokensCopy = append(tokensCopy, token)
			} else {
				tokensCopy = append(tokensCopy, tokensToAdd...)
			}
			continue
		}

		if maybeSnippet {
			tokensCopy = append(tokensCopy, token)
			continue
		}

		foundVariadic, startIndex, endIndex := parseVariadic(token, len(args))
		if foundVariadic {
			for _, arg := range args[startIndex:endIndex] {
				token.Text = arg
				tokensCopy = append(tokensCopy, token)
			}
		} else {
			token.Text = repl.ReplaceKnown(token.Text, "")
			tokensCopy = append(tokensCopy, token)
		}
	}

	// splice the imported tokens in the place of the import statement
	// and rewind cursor so Next() will land on first imported token
-	p.tokens = append(tokensBefore, append(importedTokens, tokensAfter...)...)
-	p.cursor--
+	p.tokens = append(tokensBefore, append(tokensCopy, tokensAfter...)...)
+	p.cursor -= len(args) + len(blockTokens) + 1

	return nil
}
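The `parseVariadic` call above turns an `{args[start:end]}` placeholder into a slice of the import arguments; its real implementation lives elsewhere in this package. As a rough, self-contained sketch of the slicing rules the new tests exercise (`parseVariadicSketch` and its bounds handling are illustrative assumptions, not Caddy's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVariadicSketch reports whether text looks like {args[start:end]}
// and, if so, returns the slice bounds clamped against argCount.
// Tokens without a ':' (e.g. {args[0]}) are left to the plain replacer.
func parseVariadicSketch(text string, argCount int) (bool, int, int) {
	if !strings.HasPrefix(text, "{args[") || !strings.HasSuffix(text, "]}") {
		return false, 0, 0
	}
	inner := strings.TrimSuffix(strings.TrimPrefix(text, "{args["), "]}")
	before, after, found := strings.Cut(inner, ":")
	if !found {
		return false, 0, 0
	}
	start, end := 0, argCount
	var err error
	if before != "" {
		if start, err = strconv.Atoi(before); err != nil {
			return false, 0, 0
		}
	}
	if after != "" {
		if end, err = strconv.Atoi(after); err != nil {
			return false, 0, 0
		}
	}
	// reject out-of-range or inverted bounds
	if start < 0 || end > argCount || start > end {
		return false, 0, 0
	}
	return true, start, end
}

func main() {
	args := []string{"a", "b", "c"}
	ok, s, e := parseVariadicSketch("{args[1:]}", len(args))
	fmt.Println(ok, args[s:e]) // the matched range expands to args[1:3]
}
```

With this shape, `{args[:]}` expands to every argument, `{args[10:0]}` and `{args[:11]}` are rejected, and a bare `{args[0]}` falls through to the non-variadic replacer, matching the behavior the test table below expects.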
@@ -371,11 +604,17 @@ func (p *parser) doSingleImport(importFile string) ([]Token, error) {
		return nil, p.Errf("Could not import %s: is a directory", importFile)
	}

-	input, err := ioutil.ReadAll(file)
+	input, err := io.ReadAll(file)
	if err != nil {
		return nil, p.Errf("Could not read imported file %s: %v", importFile, err)
	}

	// only warning in case of empty files
	if len(input) == 0 || len(strings.TrimSpace(string(input))) == 0 {
		caddy.Log().Warn("Import file is empty", zap.String("file", importFile))
		return []Token{}, nil
	}

	importedTokens, err := allTokens(importFile, input)
	if err != nil {
		return nil, p.Errf("Could not read tokens while importing %s: %v", importFile, err)
@@ -383,7 +622,7 @@ func (p *parser) doSingleImport(importFile string) ([]Token, error) {

	// Tack the file path onto these tokens so errors show the imported file's name
	// (we use full, absolute path to avoid bugs: issue #1892)
-	filename, err := filepath.Abs(importFile)
+	filename, err := caddy.FastAbs(importFile)
	if err != nil {
		return nil, p.Errf("Failed to get absolute path of file: %s: %v", importFile, err)
	}
@@ -401,7 +640,6 @@ func (p *parser) doSingleImport(importFile string) ([]Token, error) {
// are loaded into the current server block for later use
// by directive setup functions.
func (p *parser) directive() error {

	// a segment is a list of tokens associated with this directive
	var segment Segment
@@ -411,6 +649,16 @@ func (p *parser) directive() error {
	for p.Next() {
		if p.Val() == "{" {
			p.nesting++
			if !p.isNextOnNewLine() && p.Token().wasQuoted == 0 {
				return p.Err("Unexpected next token after '{' on same line")
			}
			if p.isNewLine() {
				return p.Err("Unexpected '{' on a new line; did you mean to place the '{' on the previous line?")
			}
		} else if p.Val() == "{}" {
			if p.isNextOnNewLine() && p.Token().wasQuoted == 0 {
				return p.Err("Unexpected '{}' at end of line")
			}
		} else if p.isNewLine() && p.nesting == 0 {
			p.cursor-- // read too far
			break
@@ -419,7 +667,7 @@ func (p *parser) directive() error {
		} else if p.Val() == "}" && p.nesting == 0 {
			return p.Err("Unexpected '}' because no matching opening brace")
		} else if p.Val() == "import" && p.isNewLine() {
-			if err := p.doImport(); err != nil {
+			if err := p.doImport(1); err != nil {
				return err
			}
			p.cursor-- // cursor is advanced when we continue, so roll back one more
@@ -460,28 +708,43 @@ func (p *parser) closeCurlyBrace() error {
	return nil
}

func (p *parser) isNamedRoute() (bool, string) {
	keys := p.block.Keys
	// A named route block is a single key with parens, prefixed with &.
	if len(keys) == 1 && strings.HasPrefix(keys[0].Text, "&(") && strings.HasSuffix(keys[0].Text, ")") {
		return true, strings.TrimSuffix(keys[0].Text[2:], ")")
	}
	return false, ""
}

func (p *parser) isSnippet() (bool, string) {
	keys := p.block.Keys
	// A snippet block is a single key with parens. Nothing else qualifies.
-	if len(keys) == 1 && strings.HasPrefix(keys[0], "(") && strings.HasSuffix(keys[0], ")") {
-		return true, strings.TrimSuffix(keys[0][1:], ")")
+	if len(keys) == 1 && strings.HasPrefix(keys[0].Text, "(") && strings.HasSuffix(keys[0].Text, ")") {
+		return true, strings.TrimSuffix(keys[0].Text[1:], ")")
	}
	return false, ""
}

// read and store everything in a block for later replay.
-func (p *parser) snippetTokens() ([]Token, error) {
-	// snippet must have curlies.
+func (p *parser) blockTokens(retainCurlies bool) ([]Token, error) {
+	// block must have curlies.
	err := p.openCurlyBrace()
	if err != nil {
		return nil, err
	}
-	nesting := 1 // count our own nesting in snippets
+	nesting := 1 // count our own nesting
	tokens := []Token{}
	if retainCurlies {
		tokens = append(tokens, p.Token())
	}
	for p.Next() {
		if p.Val() == "}" {
			nesting--
			if nesting == 0 {
				if retainCurlies {
					tokens = append(tokens, p.Token())
				}
				break
			}
		}
@@ -501,8 +764,18 @@ func (p *parser) snippetTokens() ([]Token, error) {
// head of the server block with tokens, which are
// grouped by segments.
type ServerBlock struct {
-	Keys     []string
-	Segments []Segment
+	HasBraces    bool
+	Keys         []Token
+	Segments     []Segment
+	IsNamedRoute bool
}

func (sb ServerBlock) GetKeysText() []string {
	res := []string{}
	for _, k := range sb.Keys {
		res = append(res, k.Text)
	}
	return res
}

// DispenseDirective returns a dispenser that contains
@@ -533,4 +806,7 @@ func (s Segment) Directive() string {

// spanOpen and spanClose are used to bound spans that
// contain the name of an environment variable.
-var spanOpen, spanClose = []byte{'{', '$'}, []byte{'}'}
+var (
+	spanOpen, spanClose    = []byte{'{', '$'}, []byte{'}'}
+	envVarDefaultDelimiter = ":"
+)
295  caddyconfig/caddyfile/parse_test.go  Executable file → Normal file
@@ -1,4 +1,4 @@
-// Copyright 2015 Light Code Labs, LLC
+// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -16,17 +16,101 @@ package caddyfile

import (
	"bytes"
-	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

func TestParseVariadic(t *testing.T) {
	args := make([]string, 10)
	for i, tc := range []struct {
		input  string
		result bool
	}{
		{
			input:  "",
			result: false,
		},
		{
			input:  "{args[1",
			result: false,
		},
		{
			input:  "1]}",
			result: false,
		},
		{
			input:  "{args[:]}aaaaa",
			result: false,
		},
		{
			input:  "aaaaa{args[:]}",
			result: false,
		},
		{
			input:  "{args.}",
			result: false,
		},
		{
			input:  "{args.1}",
			result: false,
		},
		{
			input:  "{args[]}",
			result: false,
		},
		{
			input:  "{args[:]}",
			result: true,
		},
		{
			input:  "{args[:]}",
			result: true,
		},
		{
			input:  "{args[0:]}",
			result: true,
		},
		{
			input:  "{args[:0]}",
			result: true,
		},
		{
			input:  "{args[-1:]}",
			result: false,
		},
		{
			input:  "{args[:11]}",
			result: false,
		},
		{
			input:  "{args[10:0]}",
			result: false,
		},
		{
			input:  "{args[0:10]}",
			result: true,
		},
		{
			input:  "{args[0]}:{args[1]}:{args[2]}",
			result: false,
		},
	} {
		token := Token{
			File: "test",
			Line: 1,
			Text: tc.input,
		}
		if v, _, _ := parseVariadic(token, len(args)); v != tc.result {
			t.Errorf("Test %d error expectation failed Expected: %t, got %t", i, tc.result, v)
		}
	}
}

func TestAllTokens(t *testing.T) {
	input := []byte("a b c\nd e")
	expected := []string{"a", "b", "c", "d", "e"}
	tokens, err := allTokens("TestAllTokens", input)
	if err != nil {
		t.Fatalf("Expected no error, got %v", err)
	}
@@ -64,10 +148,11 @@ func TestParseOneAndImport(t *testing.T) {
			"localhost",
		}, []int{1}},

-		{`localhost:1234
+		{
+			`localhost:1234
dir1 foo bar`, false, []string{
			"localhost:1234",
		}, []int{3},
+		},

		{`localhost {
@@ -160,6 +245,10 @@ func TestParseOneAndImport(t *testing.T) {
			"localhost",
		}, []int{}},

		{`localhost{
		dir1
		}`, true, []string{}, []int{}},

		{`localhost
		dir1 {
			nested {
@@ -182,14 +271,56 @@ func TestParseOneAndImport(t *testing.T) {
			"host1",
		}, []int{1, 2}},

		{`import testdata/import_test1.txt testdata/import_test2.txt`, true, []string{}, []int{}},

		{`import testdata/not_found.txt`, true, []string{}, []int{}},

		// empty file should just log a warning, and result in no tokens
		{`import testdata/empty.txt`, false, []string{}, []int{}},

		{`import testdata/only_white_space.txt`, false, []string{}, []int{}},

		// import path/to/dir/* should skip any files that start with a . when iterating over them.
		{`localhost
		dir1 arg1
		import testdata/glob/*`, false, []string{
			"localhost",
		}, []int{2, 3, 1}},

		// import path/to/dir/.* should continue to read all dotfiles in a dir.
		{`import testdata/glob/.*`, false, []string{
			"host1",
		}, []int{1, 2}},

		{`""`, false, []string{}, []int{}},

		{``, false, []string{}, []int{}},

		// Unexpected next token after '{' on same line
		{`localhost
		dir1 { a b }`, true, []string{"localhost"}, []int{}},

		// Unexpected '{' on a new line
		{`localhost
		dir1
		{
			a b
		}`, true, []string{"localhost"}, []int{}},

		// Workaround with quotes
		{`localhost
		dir1 "{" a b "}"`, false, []string{"localhost"}, []int{5}},

		// Unexpected '{}' at end of line
		{`localhost
		dir1 {}`, true, []string{"localhost"}, []int{}},
		// Workaround with quotes
		{`localhost
		dir1 "{}"`, false, []string{"localhost"}, []int{2}},

		// import with args
		{`import testdata/import_args0.txt a`, false, []string{"a"}, []int{}},
		{`import testdata/import_args1.txt a b`, false, []string{"a", "b"}, []int{}},
		{`import testdata/import_args*.txt a b`, false, []string{"a"}, []int{2}},

		// test cases found by fuzzing!
		{`import }{$"`, true, []string{}, []int{}},
		{`import /*/*.txt`, true, []string{}, []int{}},
@@ -210,12 +341,13 @@ func TestParseOneAndImport(t *testing.T) {
			t.Errorf("Test %d: Expected no error, but got: %v", i, err)
		}

		// t.Logf("%+v\n", result)
		if len(result.Keys) != len(test.keys) {
			t.Errorf("Test %d: Expected %d keys, got %d",
				i, len(test.keys), len(result.Keys))
			continue
		}
-		for j, addr := range result.Keys {
+		for j, addr := range result.GetKeysText() {
			if addr != test.keys[j] {
				t.Errorf("Test %d, key %d: Expected '%s', but was '%s'",
					i, j, test.keys[j], addr)
@@ -247,8 +379,9 @@ func TestRecursiveImport(t *testing.T) {
	}

	isExpected := func(got ServerBlock) bool {
-		if len(got.Keys) != 1 || got.Keys[0] != "localhost" {
-			t.Errorf("got keys unexpected: expect localhost, got %v", got.Keys)
+		textKeys := got.GetKeysText()
+		if len(textKeys) != 1 || textKeys[0] != "localhost" {
+			t.Errorf("got keys unexpected: expect localhost, got %v", textKeys)
			return false
		}
		if len(got.Segments) != 2 {
@@ -272,16 +405,16 @@ func TestRecursiveImport(t *testing.T) {
	}

	// test relative recursive import
-	err = ioutil.WriteFile(recursiveFile1, []byte(
+	err = os.WriteFile(recursiveFile1, []byte(
		`localhost
dir1
-import recursive_import_test2`), 0644)
+import recursive_import_test2`), 0o644)
	if err != nil {
		t.Fatal(err)
	}
	defer os.Remove(recursiveFile1)

-	err = ioutil.WriteFile(recursiveFile2, []byte("dir2 1"), 0644)
+	err = os.WriteFile(recursiveFile2, []byte("dir2 1"), 0o644)
	if err != nil {
		t.Fatal(err)
	}
@@ -306,10 +439,10 @@ func TestRecursiveImport(t *testing.T) {
	}

	// test absolute recursive import
-	err = ioutil.WriteFile(recursiveFile1, []byte(
+	err = os.WriteFile(recursiveFile1, []byte(
		`localhost
dir1
-import `+recursiveFile2), 0644)
+import `+recursiveFile2), 0o644)
	if err != nil {
		t.Fatal(err)
	}
@@ -342,8 +475,9 @@ func TestDirectiveImport(t *testing.T) {
	}

	isExpected := func(got ServerBlock) bool {
-		if len(got.Keys) != 1 || got.Keys[0] != "localhost" {
-			t.Errorf("got keys unexpected: expect localhost, got %v", got.Keys)
+		textKeys := got.GetKeysText()
+		if len(textKeys) != 1 || textKeys[0] != "localhost" {
+			t.Errorf("got keys unexpected: expect localhost, got %v", textKeys)
			return false
		}
		if len(got.Segments) != 2 {
@@ -362,8 +496,8 @@ func TestDirectiveImport(t *testing.T) {
		t.Fatal(err)
	}

-	err = ioutil.WriteFile(directiveFile, []byte(`prop1 1
-prop2 2`), 0644)
+	err = os.WriteFile(directiveFile, []byte(`prop1 1
+prop2 2`), 0o644)
	if err != nil {
		t.Fatal(err)
	}
@@ -421,6 +555,10 @@ func TestParseAll(t *testing.T) {
			{"localhost:1234", "http://host2"},
		}},

		{`foo.example.com , example.com`, false, [][]string{
			{"foo.example.com", "example.com"},
		}},

		{`localhost:1234, http://host2,`, true, [][]string{}},

		{`http://host1.com, http://host2.com {
@@ -440,6 +578,28 @@ func TestParseAll(t *testing.T) {

		{`import notfound/*`, false, [][]string{}},        // glob needn't error with no matches
		{`import notfound/file.conf`, true, [][]string{}}, // but a specific file should

		// recursive self-import
		{`import testdata/import_recursive0.txt`, true, [][]string{}},
		{`import testdata/import_recursive3.txt
import testdata/import_recursive1.txt`, true, [][]string{}},

		// cyclic imports
		{`(A) {
	import A
}
:80
import A
`, true, [][]string{}},
		{`(A) {
	import B
}
(B) {
	import A
}
:80
import A
`, true, [][]string{}},
	} {
		p := testParser(test.input)
		blocks, err := p.parseAll()
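The cyclic-import cases above are rejected by the `importGraph` that `doImport` populates via `addEdges`: adding an edge that would make the target reach back to the source is an error. A minimal, self-contained sketch of that idea (the `importGraph` type here is an illustrative reimplementation with a DFS reachability check, not the package's actual one):

```go
package main

import (
	"errors"
	"fmt"
)

// importGraph records which file (or snippet) imports which, so a
// chain like "A imports B, B imports A" can be rejected up front.
type importGraph struct {
	edges map[string][]string
}

// addEdge refuses the edge if it would close a cycle, i.e. if the
// destination can already reach the source through existing edges.
func (g *importGraph) addEdge(from, to string) error {
	if g.reachable(to, from, map[string]bool{}) {
		return errors.New("import cycle detected: " + from + " -> " + to)
	}
	g.edges[from] = append(g.edges[from], to)
	return nil
}

// reachable does a depth-first search from `from` looking for `target`.
func (g *importGraph) reachable(from, target string, seen map[string]bool) bool {
	if from == target {
		return true
	}
	if seen[from] {
		return false
	}
	seen[from] = true
	for _, next := range g.edges[from] {
		if g.reachable(next, target, seen) {
			return true
		}
	}
	return false
}

func main() {
	g := &importGraph{edges: map[string][]string{}}
	fmt.Println(g.addEdge("A", "B")) // fine: no path B -> A yet
	fmt.Println(g.addEdge("B", "A")) // rejected: closes the A -> B -> A loop
}
```

A self-import (`import A` inside snippet `A`) is just the degenerate one-node case: `addEdge("A", "A")` fails because a node trivially reaches itself.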
@@ -458,11 +618,11 @@ func TestParseAll(t *testing.T) {
		}
		for j, block := range blocks {
			if len(block.Keys) != len(test.keys[j]) {
-				t.Errorf("Test %d: Expected %d keys in block %d, got %d",
-					i, len(test.keys[j]), j, len(block.Keys))
+				t.Errorf("Test %d: Expected %d keys in block %d, got %d: %v",
+					i, len(test.keys[j]), j, len(block.Keys), block.Keys)
				continue
			}
-			for k, addr := range block.Keys {
+			for k, addr := range block.GetKeysText() {
				if addr != test.keys[j][k] {
					t.Errorf("Test %d, block %d, key %d: Expected '%s', but got '%s'",
						i, j, k, test.keys[j][k], addr)
@@ -474,6 +634,7 @@ func TestParseAll(t *testing.T) {

func TestEnvironmentReplacement(t *testing.T) {
	os.Setenv("FOOBAR", "foobar")
	os.Setenv("CHAINED", "$FOOBAR")

	for i, test := range []struct {
		input string
@@ -519,6 +680,22 @@ func TestEnvironmentReplacement(t *testing.T) {
		{
			input:  "{$FOOBAR}{$FOOBAR}",
			expect: "foobarfoobar",
		},
		{
			input:  "{$CHAINED}",
			expect: "$FOOBAR", // should not chain env expands
		},
		{
			input:  "{$FOO:default}",
			expect: "default",
		},
		{
			input:  "foo{$BAR:bar}baz",
			expect: "foobarbaz",
		},
		{
			input:  "foo{$BAR:$FOOBAR}baz",
			expect: "foo$FOOBARbaz", // should not chain env expands
		},
		{
			input:  "{$FOOBAR",
			expect: "{$FOOBAR",
@@ -544,16 +721,43 @@ func TestEnvironmentReplacement(t *testing.T) {
			expect: "}{$",
		},
	} {
-		actual, err := replaceEnvVars([]byte(test.input))
-		if err != nil {
-			t.Fatal(err)
-		}
+		actual := replaceEnvVars([]byte(test.input))
		if !bytes.Equal(actual, []byte(test.expect)) {
			t.Errorf("Test %d: Expected: '%s' but got '%s'", i, test.expect, actual)
		}
	}
}

func TestImportReplacementInJSONWithBrace(t *testing.T) {
	for i, test := range []struct {
		args   []string
		input  string
		expect string
	}{
		{
			args:   []string{"123"},
			input:  "{args[0]}",
			expect: "123",
		},
		{
			args:   []string{"123"},
			input:  `{"key":"{args[0]}"}`,
			expect: `{"key":"123"}`,
		},
		{
			args:   []string{"123", "123"},
			input:  `{"key":[{args[0]},{args[1]}]}`,
			expect: `{"key":[123,123]}`,
		},
	} {
		repl := makeArgsReplacer(test.args)
		actual := repl.ReplaceKnown(test.input, "")
		if actual != test.expect {
			t.Errorf("Test %d: Expected: '%s' but got '%s'", i, test.expect, actual)
		}
	}
}

func TestSnippets(t *testing.T) {
	p := testParser(`
(common) {
@@ -571,7 +775,7 @@ func TestSnippets(t *testing.T) {
	if len(blocks) != 1 {
		t.Fatalf("Expect exactly one server block. Got %d.", len(blocks))
	}
-	if actual, expected := blocks[0].Keys[0], "http://example.com"; expected != actual {
+	if actual, expected := blocks[0].GetKeysText()[0], "http://example.com"; expected != actual {
		t.Errorf("Expected server name to be '%s' but was '%s'", expected, actual)
	}
	if len(blocks[0].Segments) != 2 {
@@ -586,7 +790,7 @@ func TestSnippets(t *testing.T) {
}

func writeStringToTempFileOrDie(t *testing.T, str string) (pathToFile string) {
-	file, err := ioutil.TempFile("", t.Name())
+	file, err := os.CreateTemp("", t.Name())
	if err != nil {
		panic(err) // get a stack trace so we know where this was called from.
	}
@@ -603,7 +807,7 @@ func TestImportedFilesIgnoreNonDirectiveImportTokens(t *testing.T) {
	fileName := writeStringToTempFileOrDie(t, `
	http://example.com {
		# This isn't an import directive, it's just an arg with value 'import'
-		basicauth / import password
+		basic_auth / import password
	}
	`)
	// Parse the root file that imports the other one.
@@ -614,12 +818,12 @@ func TestImportedFilesIgnoreNonDirectiveImportTokens(t *testing.T) {
	}
	auth := blocks[0].Segments[0]
	line := auth[0].Text + " " + auth[1].Text + " " + auth[2].Text + " " + auth[3].Text
-	if line != "basicauth / import password" {
+	if line != "basic_auth / import password" {
		// Previously, it would be changed to:
-		// basicauth / import /path/to/test/dir/password
+		// basic_auth / import /path/to/test/dir/password
		// referencing a file that (probably) doesn't exist and changing the
		// password!
-		t.Errorf("Expected basicauth tokens to be 'basicauth / import password' but got %#q", line)
+		t.Errorf("Expected basic_auth tokens to be 'basic_auth / import password' but got %#q", line)
	}
}
@@ -646,7 +850,7 @@ func TestSnippetAcrossMultipleFiles(t *testing.T) {
	if len(blocks) != 1 {
		t.Fatalf("Expect exactly one server block. Got %d.", len(blocks))
	}
-	if actual, expected := blocks[0].Keys[0], "http://example.com"; expected != actual {
+	if actual, expected := blocks[0].GetKeysText()[0], "http://example.com"; expected != actual {
		t.Errorf("Expected server name to be '%s' but was '%s'", expected, actual)
	}
	if len(blocks[0].Segments) != 1 {
@@ -657,6 +861,29 @@ func TestSnippetAcrossMultipleFiles(t *testing.T) {
	}
}

func TestRejectsGlobalMatcher(t *testing.T) {
	p := testParser(`
	@rejected path /foo

	(common) {
		gzip foo
		errors stderr
	}

	http://example.com {
		import common
	}
	`)
	_, err := p.parseAll()
	if err == nil {
		t.Fatal("Expected an error, but got nil")
	}
	expected := "request matchers may not be defined globally, they must be in a site block; found @rejected, at Testfile:2"
	if err.Error() != expected {
		t.Errorf("Expected error to be '%s' but got '%v'", expected, err)
	}
}

func testParser(input string) parser {
	return parser{Dispenser: NewTestDispenser(input)}
}
0  caddyconfig/caddyfile/testdata/empty.txt  vendored  Normal file
4  caddyconfig/caddyfile/testdata/glob/.dotfile.txt  vendored  Normal file
@@ -0,0 +1,4 @@
+host1 {
+	dir1
+	dir2 arg1
+}
2  caddyconfig/caddyfile/testdata/glob/import_test1.txt  vendored  Normal file
@@ -0,0 +1,2 @@
+dir2 arg1 arg2
+dir3
1  caddyconfig/caddyfile/testdata/import_args0.txt  vendored  Normal file
@@ -0,0 +1 @@
+{args[0]}
1
caddyconfig/caddyfile/testdata/import_args1.txt
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
{args[0]} {args[1]}
|
0
caddyconfig/caddyfile/testdata/import_glob0.txt
vendored
Executable file → Normal file
0
caddyconfig/caddyfile/testdata/import_glob1.txt
vendored
Executable file → Normal file
0
caddyconfig/caddyfile/testdata/import_glob2.txt
vendored
Executable file → Normal file
1
caddyconfig/caddyfile/testdata/import_recursive0.txt
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
import import_recursive0.txt
|
1
caddyconfig/caddyfile/testdata/import_recursive1.txt
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
import import_recursive2.txt
|
1
caddyconfig/caddyfile/testdata/import_recursive2.txt
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
import import_recursive3.txt
|
1
caddyconfig/caddyfile/testdata/import_recursive3.txt
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
import import_recursive1.txt
|
0
caddyconfig/caddyfile/testdata/import_test1.txt
vendored
Executable file → Normal file
0
caddyconfig/caddyfile/testdata/import_test2.txt
vendored
Executable file → Normal file
7
caddyconfig/caddyfile/testdata/only_white_space.txt
vendored
Normal file
|
@ -0,0 +1,7 @@
|
|||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
@ -24,7 +24,7 @@ import (
|
|||
// Adapter is a type which can adapt a configuration to Caddy JSON.
|
||||
// It returns the results and any warnings, or an error.
|
||||
type Adapter interface {
|
||||
Adapt(body []byte, options map[string]interface{}) ([]byte, []Warning, error)
|
||||
Adapt(body []byte, options map[string]any) ([]byte, []Warning, error)
|
||||
}
|
||||
|
||||
// Warning represents a warning or notice related to conversion.
|
||||
|
@ -35,12 +35,20 @@ type Warning struct {
|
|||
Message string `json:"message,omitempty"`
|
||||
}
|
||||
|
||||
func (w Warning) String() string {
|
||||
var directive string
|
||||
if w.Directive != "" {
|
||||
directive = fmt.Sprintf(" (%s)", w.Directive)
|
||||
}
|
||||
return fmt.Sprintf("%s:%d%s: %s", w.File, w.Line, directive, w.Message)
|
||||
}
|
||||
|
||||
// JSON encodes val as JSON, returning it as a json.RawMessage. Any
|
||||
// marshaling errors (which are highly unlikely with correct code)
|
||||
// are converted to warnings. This is convenient when filling config
|
||||
// structs that require a json.RawMessage, without having to worry
|
||||
// about errors.
|
||||
func JSON(val interface{}, warnings *[]Warning) json.RawMessage {
|
||||
func JSON(val any, warnings *[]Warning) json.RawMessage {
|
||||
b, err := json.Marshal(val)
|
||||
if err != nil {
|
||||
if warnings != nil {
|
||||
|
@ -51,15 +59,14 @@ func JSON(val interface{}, warnings *[]Warning) json.RawMessage {
|
|||
return b
|
||||
}
|
||||
|
||||
// JSONModuleObject is like JSON, except it marshals val into a JSON object
|
||||
// and then adds a key to that object named fieldName with the value fieldVal.
|
||||
// This is useful for JSON-encoding module values where the module name has to
|
||||
// be described within the object by a certain key; for example,
|
||||
// "responder": "file_server" for a file server HTTP responder. The val must
|
||||
// encode into a map[string]interface{} (i.e. it must be a struct or map),
|
||||
// and any errors are converted into warnings, so this can be conveniently
|
||||
// used when filling a struct. For correct code, there should be no errors.
|
||||
func JSONModuleObject(val interface{}, fieldName, fieldVal string, warnings *[]Warning) json.RawMessage {
|
||||
// JSONModuleObject is like JSON(), except it marshals val into a JSON object
|
||||
// with an added key named fieldName with the value fieldVal. This is useful
|
||||
// for encoding module values where the module name has to be described within
|
||||
// the object by a certain key; for example, `"handler": "file_server"` for a
|
||||
// file server HTTP handler (fieldName="handler" and fieldVal="file_server").
|
||||
// The val parameter must encode into a map[string]any (i.e. it must be
|
||||
// a struct or map). Any errors are converted into warnings.
|
||||
func JSONModuleObject(val any, fieldName, fieldVal string, warnings *[]Warning) json.RawMessage {
|
||||
// encode to a JSON object first
|
||||
enc, err := json.Marshal(val)
|
||||
if err != nil {
|
||||
|
@ -70,7 +77,7 @@ func JSONModuleObject(val interface{}, fieldName, fieldVal string, warnings *[]W
|
|||
}
|
||||
|
||||
// then decode the object
|
||||
var tmp map[string]interface{}
|
||||
var tmp map[string]any
|
||||
err = json.Unmarshal(enc, &tmp)
|
||||
if err != nil {
|
||||
if warnings != nil {
|
||||
|
@ -94,12 +101,6 @@ func JSONModuleObject(val interface{}, fieldName, fieldVal string, warnings *[]W
|
|||
return result
|
||||
}
|
||||
|
||||
// JSONIndent is used to JSON-marshal the final resulting Caddy
|
||||
// configuration in a consistent, human-readable way.
|
||||
func JSONIndent(val interface{}) ([]byte, error) {
|
||||
return json.MarshalIndent(val, "", "\t")
|
||||
}
|
||||
|
||||
// RegisterAdapter registers a config adapter with the given name.
|
||||
// This should usually be done at init-time. It panics if the
|
||||
// adapter cannot be registered successfully.
|
||||
|
|
|
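The JSONModuleObject doc comment above describes marshaling a value into a JSON object and then injecting a module-naming key (e.g. `"handler": "file_server"`). A minimal standalone sketch of that technique, with error handling simplified to a returned error rather than Caddy's collected warnings:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonModuleObject marshals val into a JSON object, then adds
// fieldName=fieldVal so the module name is described within the
// object itself. val must encode to a JSON object (struct or map).
func jsonModuleObject(val any, fieldName, fieldVal string) (json.RawMessage, error) {
	// encode to a JSON object first
	enc, err := json.Marshal(val)
	if err != nil {
		return nil, err
	}
	// then decode the object into a generic map
	var tmp map[string]any
	if err := json.Unmarshal(enc, &tmp); err != nil {
		return nil, err
	}
	// inject the module-naming key and re-encode
	tmp[fieldName] = fieldVal
	return json.Marshal(tmp)
}

func main() {
	type fileServer struct {
		Root string `json:"root"`
	}
	out, _ := jsonModuleObject(fileServer{Root: "/srv"}, "handler", "file_server")
	fmt.Println(string(out)) // map keys are emitted in sorted order
}
```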
@ -17,29 +17,32 @@ package httpcaddyfile
|
|||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"net/netip"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
|
||||
"github.com/caddyserver/certmagic"
|
||||
|
||||
"github.com/caddyserver/caddy/v2"
|
||||
"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
|
||||
"github.com/caddyserver/caddy/v2/modules/caddyhttp"
|
||||
"github.com/caddyserver/certmagic"
|
||||
)
|
||||
|
||||
// mapAddressToServerBlocks returns a map of listener address to list of server
|
||||
// mapAddressToProtocolToServerBlocks returns a map of listener address to list of server
|
||||
// blocks that will be served on that address. To do this, each server block is
|
||||
// expanded so that each one is considered individually, although keys of a
|
||||
// server block that share the same address stay grouped together so the config
|
||||
// isn't repeated unnecessarily. For example, this Caddyfile:
|
||||
//
|
||||
// example.com {
|
||||
// bind 127.0.0.1
|
||||
// }
|
||||
// www.example.com, example.net/path, localhost:9999 {
|
||||
// bind 127.0.0.1 1.2.3.4
|
||||
// }
|
||||
// example.com {
|
||||
// bind 127.0.0.1
|
||||
// }
|
||||
// www.example.com, example.net/path, localhost:9999 {
|
||||
// bind 127.0.0.1 1.2.3.4
|
||||
// }
|
||||
//
|
||||
// has two server blocks to start with. But expressed in this Caddyfile are
|
||||
// actually 4 listener addresses: 127.0.0.1:443, 1.2.3.4:443, 127.0.0.1:9999,
|
||||
|
@ -74,9 +77,15 @@ import (
|
|||
// repetition may be undesirable, so call consolidateAddrMappings() to map
|
||||
// multiple addresses to the same lists of server blocks (a many:many mapping).
|
||||
// (Doing this is essentially a map-reduce technique.)
|
||||
func (st *ServerType) mapAddressToServerBlocks(originalServerBlocks []serverBlock,
|
||||
options map[string]interface{}) (map[string][]serverBlock, error) {
|
||||
sbmap := make(map[string][]serverBlock)
|
||||
func (st *ServerType) mapAddressToProtocolToServerBlocks(originalServerBlocks []serverBlock,
|
||||
options map[string]any,
|
||||
) (map[string]map[string][]serverBlock, error) {
|
||||
addrToProtocolToServerBlocks := map[string]map[string][]serverBlock{}
|
||||
|
||||
type keyWithParsedKey struct {
|
||||
key caddyfile.Token
|
||||
parsedKey Address
|
||||
}
|
||||
|
||||
for i, sblock := range originalServerBlocks {
|
||||
// within a server block, we need to map all the listener addresses
|
||||
|
@ -84,95 +93,193 @@ func (st *ServerType) mapAddressToServerBlocks(originalServerBlocks []serverBloc
|
|||
// will be served by them; this has the effect of treating each
|
||||
// key of a server block as its own, but without having to repeat its
|
||||
// contents in cases where multiple keys really can be served together
|
||||
addrToKeys := make(map[string][]string)
|
||||
addrToProtocolToKeyWithParsedKeys := map[string]map[string][]keyWithParsedKey{}
|
||||
for j, key := range sblock.block.Keys {
|
||||
parsedKey, err := ParseAddress(key.Text)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing key: %v", err)
|
||||
}
|
||||
parsedKey = parsedKey.Normalize()
|
||||
|
||||
// a key can have multiple listener addresses if there are multiple
|
||||
// arguments to the 'bind' directive (although they will all have
|
||||
// the same port, since the port is defined by the key or is implicit
|
||||
// through automatic HTTPS)
|
||||
addrs, err := st.listenerAddrsForServerBlockKey(sblock, key, options)
|
||||
listeners, err := st.listenersForServerBlockAddress(sblock, parsedKey, options)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("server block %d, key %d (%s): determining listener address: %v", i, j, key, err)
|
||||
return nil, fmt.Errorf("server block %d, key %d (%s): determining listener address: %v", i, j, key.Text, err)
|
||||
}
|
||||
|
||||
// associate this key with each listener address it is served on
|
||||
for _, addr := range addrs {
|
||||
addrToKeys[addr] = append(addrToKeys[addr], key)
|
||||
// associate this key with its protocols and each listener address served with them
|
||||
kwpk := keyWithParsedKey{key, parsedKey}
|
||||
for addr, protocols := range listeners {
|
||||
protocolToKeyWithParsedKeys, ok := addrToProtocolToKeyWithParsedKeys[addr]
|
||||
if !ok {
|
||||
protocolToKeyWithParsedKeys = map[string][]keyWithParsedKey{}
|
||||
addrToProtocolToKeyWithParsedKeys[addr] = protocolToKeyWithParsedKeys
|
||||
}
|
||||
|
||||
// an empty protocol indicates the default, a nil or empty value in the ListenProtocols array
|
||||
if len(protocols) == 0 {
|
||||
protocols[""] = struct{}{}
|
||||
}
|
||||
for prot := range protocols {
|
||||
protocolToKeyWithParsedKeys[prot] = append(
|
||||
protocolToKeyWithParsedKeys[prot],
|
||||
kwpk)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// make a slice of the map keys so we can iterate in sorted order
|
||||
addrs := make([]string, 0, len(addrToProtocolToKeyWithParsedKeys))
|
||||
for addr := range addrToProtocolToKeyWithParsedKeys {
|
||||
addrs = append(addrs, addr)
|
||||
}
|
||||
sort.Strings(addrs)
|
||||
|
||||
// now that we know which addresses serve which keys of this
|
||||
// server block, we iterate that mapping and create a list of
|
||||
// new server blocks for each address where the keys of the
|
||||
// server block are only the ones which use the address; but
|
||||
// the contents (tokens) are of course the same
|
||||
for addr, keys := range addrToKeys {
|
||||
// parse keys so that we only have to do it once
|
||||
parsedKeys := make([]Address, 0, len(keys))
|
||||
for _, key := range keys {
|
||||
addr, err := ParseAddress(key)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing key '%s': %v", key, err)
|
||||
}
|
||||
parsedKeys = append(parsedKeys, addr.Normalize())
|
||||
for _, addr := range addrs {
|
||||
protocolToKeyWithParsedKeys := addrToProtocolToKeyWithParsedKeys[addr]
|
||||
|
||||
prots := make([]string, 0, len(protocolToKeyWithParsedKeys))
|
||||
for prot := range protocolToKeyWithParsedKeys {
|
||||
prots = append(prots, prot)
|
||||
}
|
||||
sbmap[addr] = append(sbmap[addr], serverBlock{
|
||||
block: caddyfile.ServerBlock{
|
||||
Keys: keys,
|
||||
Segments: sblock.block.Segments,
|
||||
},
|
||||
pile: sblock.pile,
|
||||
keys: parsedKeys,
|
||||
sort.Strings(prots)
|
||||
|
||||
protocolToServerBlocks, ok := addrToProtocolToServerBlocks[addr]
|
||||
if !ok {
|
||||
protocolToServerBlocks = map[string][]serverBlock{}
|
||||
addrToProtocolToServerBlocks[addr] = protocolToServerBlocks
|
||||
}
|
||||
|
||||
for _, prot := range prots {
|
||||
keyWithParsedKeys := protocolToKeyWithParsedKeys[prot]
|
||||
|
||||
keys := make([]caddyfile.Token, len(keyWithParsedKeys))
|
||||
parsedKeys := make([]Address, len(keyWithParsedKeys))
|
||||
|
||||
for k, keyWithParsedKey := range keyWithParsedKeys {
|
||||
keys[k] = keyWithParsedKey.key
|
||||
parsedKeys[k] = keyWithParsedKey.parsedKey
|
||||
}
|
||||
|
||||
protocolToServerBlocks[prot] = append(protocolToServerBlocks[prot], serverBlock{
|
||||
block: caddyfile.ServerBlock{
|
||||
Keys: keys,
|
||||
Segments: sblock.block.Segments,
|
||||
},
|
||||
pile: sblock.pile,
|
||||
parsedKeys: parsedKeys,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return addrToProtocolToServerBlocks, nil
|
||||
}
|
||||
|
||||
// consolidateAddrMappings eliminates repetition of identical server blocks in a mapping of
|
||||
// single listener addresses to protocols to lists of server blocks. Since multiple addresses
|
||||
// may serve multiple protocols to identical sites (server block contents), this function turns
|
||||
// a 1:many mapping into a many:many mapping. Server block contents (tokens) must be
|
||||
// exactly identical so that reflect.DeepEqual returns true in order for the addresses to be combined.
|
||||
// Identical entries are deleted from the addrToServerBlocks map. Essentially, each pairing (each
|
||||
// association from multiple addresses to multiple server blocks; i.e. each element of
|
||||
// the returned slice) becomes a server definition in the output JSON.
|
||||
func (st *ServerType) consolidateAddrMappings(addrToProtocolToServerBlocks map[string]map[string][]serverBlock) []sbAddrAssociation {
|
||||
sbaddrs := make([]sbAddrAssociation, 0, len(addrToProtocolToServerBlocks))
|
||||
|
||||
addrs := make([]string, 0, len(addrToProtocolToServerBlocks))
|
||||
for addr := range addrToProtocolToServerBlocks {
|
||||
addrs = append(addrs, addr)
|
||||
}
|
||||
sort.Strings(addrs)
|
||||
|
||||
for _, addr := range addrs {
|
||||
protocolToServerBlocks := addrToProtocolToServerBlocks[addr]
|
||||
|
||||
prots := make([]string, 0, len(protocolToServerBlocks))
|
||||
for prot := range protocolToServerBlocks {
|
||||
prots = append(prots, prot)
|
||||
}
|
||||
sort.Strings(prots)
|
||||
|
||||
for _, prot := range prots {
|
||||
serverBlocks := protocolToServerBlocks[prot]
|
||||
|
||||
// now find other addresses that map to identical
|
||||
// server blocks and add them to our map of listener
|
||||
// addresses and protocols, while removing them from
|
||||
// the original map
|
||||
listeners := map[string]map[string]struct{}{}
|
||||
|
||||
for otherAddr, otherProtocolToServerBlocks := range addrToProtocolToServerBlocks {
|
||||
for otherProt, otherServerBlocks := range otherProtocolToServerBlocks {
|
||||
if addr == otherAddr && prot == otherProt || reflect.DeepEqual(serverBlocks, otherServerBlocks) {
|
||||
listener, ok := listeners[otherAddr]
|
||||
if !ok {
|
||||
listener = map[string]struct{}{}
|
||||
listeners[otherAddr] = listener
|
||||
}
|
||||
listener[otherProt] = struct{}{}
|
||||
delete(otherProtocolToServerBlocks, otherProt)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
addresses := make([]string, 0, len(listeners))
|
||||
for lnAddr := range listeners {
|
||||
addresses = append(addresses, lnAddr)
|
||||
}
|
||||
sort.Strings(addresses)
|
||||
|
||||
addressesWithProtocols := make([]addressWithProtocols, 0, len(listeners))
|
||||
|
||||
for _, lnAddr := range addresses {
|
||||
lnProts := listeners[lnAddr]
|
||||
prots := make([]string, 0, len(lnProts))
|
||||
for prot := range lnProts {
|
||||
prots = append(prots, prot)
|
||||
}
|
||||
sort.Strings(prots)
|
||||
|
||||
addressesWithProtocols = append(addressesWithProtocols, addressWithProtocols{
|
||||
address: lnAddr,
|
||||
protocols: prots,
|
||||
})
|
||||
}
|
||||
|
||||
sbaddrs = append(sbaddrs, sbAddrAssociation{
|
||||
addressesWithProtocols: addressesWithProtocols,
|
||||
serverBlocks: serverBlocks,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return sbmap, nil
|
||||
}
|
||||
|
||||
// consolidateAddrMappings eliminates repetition of identical server blocks in a mapping of
|
||||
// single listener addresses to lists of server blocks. Since multiple addresses may serve
|
||||
// identical sites (server block contents), this function turns a 1:many mapping into a
|
||||
// many:many mapping. Server block contents (tokens) must be exactly identical so that
|
||||
// reflect.DeepEqual returns true in order for the addresses to be combined. Identical
|
||||
// entries are deleted from the addrToServerBlocks map. Essentially, each pairing (each
|
||||
// association from multiple addresses to multiple server blocks; i.e. each element of
|
||||
// the returned slice) becomes a server definition in the output JSON.
|
||||
func (st *ServerType) consolidateAddrMappings(addrToServerBlocks map[string][]serverBlock) []sbAddrAssociation {
|
||||
sbaddrs := make([]sbAddrAssociation, 0, len(addrToServerBlocks))
|
||||
for addr, sblocks := range addrToServerBlocks {
|
||||
// we start with knowing that at least this address
|
||||
// maps to these server blocks
|
||||
a := sbAddrAssociation{
|
||||
addresses: []string{addr},
|
||||
serverBlocks: sblocks,
|
||||
}
|
||||
|
||||
// now find other addresses that map to identical
|
||||
// server blocks and add them to our list of
|
||||
// addresses, while removing them from the map
|
||||
for otherAddr, otherSblocks := range addrToServerBlocks {
|
||||
if addr == otherAddr {
|
||||
continue
|
||||
}
|
||||
if reflect.DeepEqual(sblocks, otherSblocks) {
|
||||
a.addresses = append(a.addresses, otherAddr)
|
||||
delete(addrToServerBlocks, otherAddr)
|
||||
}
|
||||
}
|
||||
|
||||
sbaddrs = append(sbaddrs, a)
|
||||
}
|
||||
return sbaddrs
|
||||
}
|
||||
|
||||
func (st *ServerType) listenerAddrsForServerBlockKey(sblock serverBlock, key string,
|
||||
options map[string]interface{}) ([]string, error) {
|
||||
addr, err := ParseAddress(key)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing key: %v", err)
|
||||
// listenersForServerBlockAddress essentially converts the Caddyfile site addresses to a map from
|
||||
// Caddy listener addresses and the protocols to serve them with to the parsed address for each server block.
|
||||
func (st *ServerType) listenersForServerBlockAddress(sblock serverBlock, addr Address,
|
||||
options map[string]any,
|
||||
) (map[string]map[string]struct{}, error) {
|
||||
switch addr.Scheme {
|
||||
case "wss":
|
||||
return nil, fmt.Errorf("the scheme wss:// is only supported in browsers; use https:// instead")
|
||||
case "ws":
|
||||
return nil, fmt.Errorf("the scheme ws:// is only supported in browsers; use http:// instead")
|
||||
case "https", "http", "":
|
||||
// Do nothing or handle the valid schemes
|
||||
default:
|
||||
return nil, fmt.Errorf("unsupported URL scheme %s://", addr.Scheme)
|
||||
}
|
||||
addr = addr.Normalize()
|
||||
|
||||
// figure out the HTTP and HTTPS ports; either
|
||||
// use defaults, or override with user config
|
||||
|
@ -196,36 +303,58 @@ func (st *ServerType) listenerAddrsForServerBlockKey(sblock serverBlock, key str
|
|||
|
||||
// error if scheme and port combination violate convention
|
||||
if (addr.Scheme == "http" && lnPort == httpsPort) || (addr.Scheme == "https" && lnPort == httpPort) {
|
||||
return nil, fmt.Errorf("[%s] scheme and port violate convention", key)
|
||||
return nil, fmt.Errorf("[%s] scheme and port violate convention", addr.String())
|
||||
}
|
||||
|
||||
// the bind directive specifies hosts, but is optional
|
||||
lnHosts := make([]string, 0, len(sblock.pile))
|
||||
// the bind directive specifies hosts (and potentially network), and the protocols to serve them with, but is optional
|
||||
lnCfgVals := make([]addressesWithProtocols, 0, len(sblock.pile["bind"]))
|
||||
for _, cfgVal := range sblock.pile["bind"] {
|
||||
lnHosts = append(lnHosts, cfgVal.Value.([]string)...)
|
||||
if val, ok := cfgVal.Value.(addressesWithProtocols); ok {
|
||||
lnCfgVals = append(lnCfgVals, val)
|
||||
}
|
||||
}
|
||||
if len(lnHosts) == 0 {
|
||||
lnHosts = []string{""}
|
||||
}
|
||||
|
||||
// use a map to prevent duplication
|
||||
listeners := make(map[string]struct{})
|
||||
for _, host := range lnHosts {
|
||||
addr, err := caddy.ParseNetworkAddress(host)
|
||||
if err == nil && addr.IsUnixNetwork() {
|
||||
listeners[host] = struct{}{}
|
||||
if len(lnCfgVals) == 0 {
|
||||
if defaultBindValues, ok := options["default_bind"].([]ConfigValue); ok {
|
||||
for _, defaultBindValue := range defaultBindValues {
|
||||
lnCfgVals = append(lnCfgVals, defaultBindValue.Value.(addressesWithProtocols))
|
||||
}
|
||||
} else {
|
||||
listeners[net.JoinHostPort(host, lnPort)] = struct{}{}
|
||||
lnCfgVals = []addressesWithProtocols{{
|
||||
addresses: []string{""},
|
||||
protocols: nil,
|
||||
}}
|
||||
}
|
||||
}
|
||||
|
||||
// now turn map into list
|
||||
listenersList := make([]string, 0, len(listeners))
|
||||
for lnStr := range listeners {
|
||||
listenersList = append(listenersList, lnStr)
|
||||
// use a map to prevent duplication
|
||||
listeners := map[string]map[string]struct{}{}
|
||||
for _, lnCfgVal := range lnCfgVals {
|
||||
for _, lnAddr := range lnCfgVal.addresses {
|
||||
lnNetw, lnHost, _, err := caddy.SplitNetworkAddress(lnAddr)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("splitting listener address: %v", err)
|
||||
}
|
||||
networkAddr, err := caddy.ParseNetworkAddress(caddy.JoinNetworkAddress(lnNetw, lnHost, lnPort))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing network address: %v", err)
|
||||
}
|
||||
if _, ok := listeners[addr.String()]; !ok {
|
||||
listeners[networkAddr.String()] = map[string]struct{}{}
|
||||
}
|
||||
for _, protocol := range lnCfgVal.protocols {
|
||||
listeners[networkAddr.String()][protocol] = struct{}{}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return listenersList, nil
|
||||
return listeners, nil
|
||||
}
|
||||
|
||||
// addressesWithProtocols associates a list of listen addresses
|
||||
// with a list of protocols to serve them with
|
||||
type addressesWithProtocols struct {
|
||||
addresses []string
|
||||
protocols []string
|
||||
}
|
||||
|
||||
// Address represents a site address. It contains
|
||||
|
@ -328,8 +457,10 @@ func (a Address) Normalize() Address {
|
|||
|
||||
// ensure host is normalized if it's an IP address
|
||||
host := strings.TrimSpace(a.Host)
|
||||
if ip := net.ParseIP(host); ip != nil {
|
||||
host = ip.String()
|
||||
if ip, err := netip.ParseAddr(host); err == nil {
|
||||
if ip.Is6() && !ip.Is4() && !ip.Is4In6() {
|
||||
host = ip.String()
|
||||
}
|
||||
}
|
||||
|
||||
return Address{
|
||||
|
@ -341,28 +472,6 @@ func (a Address) Normalize() Address {
|
|||
}
|
||||
}
|
||||
|
||||
// Key returns a string form of a, much like String() does, but this
|
||||
// method doesn't add anything default that wasn't in the original.
|
||||
func (a Address) Key() string {
|
||||
res := ""
|
||||
if a.Scheme != "" {
|
||||
res += a.Scheme + "://"
|
||||
}
|
||||
if a.Host != "" {
|
||||
res += a.Host
|
||||
}
|
||||
// insert port only if the original has its own explicit port
|
||||
if a.Port != "" &&
|
||||
len(a.Original) >= len(res) &&
|
||||
strings.HasPrefix(a.Original[len(res):], ":"+a.Port) {
|
||||
res += ":" + a.Port
|
||||
}
|
||||
if a.Path != "" {
|
||||
res += a.Path
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
// lowerExceptPlaceholders lowercases s except within
|
||||
// placeholders (substrings in non-escaped '{ }' spans).
|
||||
// See https://github.com/caddyserver/caddy/issues/3264
|
||||
|
|
|
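A recurring pattern in the refactored mapping code above is collecting map keys into a slice and sorting them before iterating (`addrs`, `prots`, `addresses`), because Go randomizes map iteration order and the adapted JSON output must be deterministic. A minimal sketch of that pattern with illustrative data:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the keys of m in sorted order, so that
// iterating the map produces deterministic output despite Go's
// randomized map iteration order.
func sortedKeys(m map[string][]string) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	// hypothetical listener-address mapping for illustration
	addrs := map[string][]string{
		":443":  {"example.com"},
		":80":   {"example.net"},
		":1234": {"localhost"},
	}
	for _, addr := range sortedKeys(addrs) {
		fmt.Println(addr, addrs[addr])
	}
}
```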
@ -12,7 +12,7 @@
|
|||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// +build gofuzz
|
||||
//go:build gofuzz
|
||||
|
||||
package httpcaddyfile
|
||||
|
||||
|
|
|
@ -106,67 +106,128 @@ func TestAddressString(t *testing.T) {
|
|||
func TestKeyNormalization(t *testing.T) {
|
||||
testCases := []struct {
|
||||
input string
|
||||
expect string
|
||||
expect Address
|
||||
}{
|
||||
{
|
||||
input: "example.com",
|
||||
expect: "example.com",
|
||||
input: "example.com",
|
||||
expect: Address{
|
||||
Host: "example.com",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "http://host:1234/path",
|
||||
expect: "http://host:1234/path",
|
||||
input: "http://host:1234/path",
|
||||
expect: Address{
|
||||
Scheme: "http",
|
||||
Host: "host",
|
||||
Port: "1234",
|
||||
Path: "/path",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "HTTP://A/ABCDEF",
|
||||
expect: "http://a/ABCDEF",
|
||||
input: "HTTP://A/ABCDEF",
|
||||
expect: Address{
|
||||
Scheme: "http",
|
||||
Host: "a",
|
||||
Path: "/ABCDEF",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "A/ABCDEF",
|
||||
expect: "a/ABCDEF",
|
||||
input: "A/ABCDEF",
|
||||
expect: Address{
|
||||
Host: "a",
|
||||
Path: "/ABCDEF",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "A:2015/Path",
|
||||
expect: "a:2015/Path",
|
||||
input: "A:2015/Path",
|
||||
expect: Address{
|
||||
Host: "a",
|
||||
Port: "2015",
|
||||
Path: "/Path",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "sub.{env.MY_DOMAIN}",
|
||||
expect: "sub.{env.MY_DOMAIN}",
|
||||
input: "sub.{env.MY_DOMAIN}",
|
||||
expect: Address{
|
||||
Host: "sub.{env.MY_DOMAIN}",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "sub.ExAmPle",
|
||||
expect: "sub.example",
|
||||
input: "sub.ExAmPle",
|
||||
expect: Address{
|
||||
Host: "sub.example",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "sub.\\{env.MY_DOMAIN\\}",
|
||||
expect: "sub.\\{env.my_domain\\}",
|
||||
input: "sub.\\{env.MY_DOMAIN\\}",
|
||||
expect: Address{
|
||||
Host: "sub.\\{env.my_domain\\}",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "sub.{env.MY_DOMAIN}.com",
|
||||
expect: "sub.{env.MY_DOMAIN}.com",
|
||||
input: "sub.{env.MY_DOMAIN}.com",
|
||||
expect: Address{
|
||||
Host: "sub.{env.MY_DOMAIN}.com",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: ":80",
|
||||
expect: ":80",
|
||||
input: ":80",
|
||||
expect: Address{
|
||||
Port: "80",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: ":443",
|
||||
expect: ":443",
|
||||
input: ":443",
|
||||
expect: Address{
|
||||
Port: "443",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: ":1234",
|
||||
expect: ":1234",
|
||||
input: ":1234",
|
||||
expect: Address{
|
||||
Port: "1234",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "",
|
||||
expect: "",
|
||||
expect: Address{},
|
||||
},
|
||||
{
|
||||
input: ":",
|
||||
expect: "",
|
||||
expect: Address{},
|
||||
},
|
||||
{
|
||||
input: "[::]",
|
||||
expect: "::",
|
||||
input: "[::]",
|
||||
expect: Address{
|
||||
Host: "::",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "127.0.0.1",
|
||||
expect: Address{
|
||||
Host: "127.0.0.1",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "[2001:db8:85a3:8d3:1319:8a2e:370:7348]:1234",
|
||||
expect: Address{
|
||||
Host: "2001:db8:85a3:8d3:1319:8a2e:370:7348",
|
||||
Port: "1234",
|
||||
},
|
||||
},
|
||||
{
|
||||
// IPv4 address in IPv6 form (#4381)
|
||||
input: "[::ffff:cff4:e77d]:1234",
|
||||
expect: Address{
|
||||
Host: "::ffff:cff4:e77d",
|
||||
Port: "1234",
|
||||
},
|
||||
},
|
||||
{
|
||||
input: "::ffff:cff4:e77d",
|
||||
expect: Address{
|
||||
Host: "::ffff:cff4:e77d",
|
||||
},
|
||||
},
|
||||
}
|
||||
for i, tc := range testCases {
|
||||
|
@ -175,9 +236,18 @@ func TestKeyNormalization(t *testing.T) {
|
|||
t.Errorf("Test %d: Parsing address '%s': %v", i, tc.input, err)
|
||||
continue
|
||||
}
|
||||
if actual := addr.Normalize().Key(); actual != tc.expect {
|
||||
t.Errorf("Test %d: Input '%s': Expected '%s' but got '%s'", i, tc.input, tc.expect, actual)
|
||||
actual := addr.Normalize()
|
||||
if actual.Scheme != tc.expect.Scheme {
|
||||
t.Errorf("Test %d: Input '%s': Expected Scheme='%s' but got Scheme='%s'", i, tc.input, tc.expect.Scheme, actual.Scheme)
|
||||
}
|
||||
if actual.Host != tc.expect.Host {
|
||||
t.Errorf("Test %d: Input '%s': Expected Host='%s' but got Host='%s'", i, tc.input, tc.expect.Host, actual.Host)
|
||||
}
|
||||
if actual.Port != tc.expect.Port {
|
||||
t.Errorf("Test %d: Input '%s': Expected Port='%s' but got Port='%s'", i, tc.input, tc.expect.Port, actual.Port)
|
||||
}
|
||||
if actual.Path != tc.expect.Path {
|
||||
t.Errorf("Test %d: Input '%s': Expected Path='%s' but got Path='%s'", i, tc.input, tc.expect.Path, actual.Path)
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
|
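The IPv4-in-IPv6 test cases above exercise the `Normalize()` change from `net.ParseIP` to `netip.ParseAddr`: only a pure IPv6 host is rewritten to its canonical form, while IPv4 and IPv4-mapped addresses keep the spelling the user wrote (see #4381). A minimal sketch of just that host-normalization step:

```go
package main

import (
	"fmt"
	"net/netip"
)

// normalizeHost canonicalizes only pure IPv6 hosts; IPv4 and
// IPv4-in-IPv6 forms are returned exactly as written, matching
// the Address.Normalize change in this diff.
func normalizeHost(host string) string {
	if ip, err := netip.ParseAddr(host); err == nil {
		if ip.Is6() && !ip.Is4() && !ip.Is4In6() {
			return ip.String()
		}
	}
	return host
}

func main() {
	fmt.Println(normalizeHost("2001:DB8::1"))      // canonicalized to lowercase form
	fmt.Println(normalizeHost("::ffff:cff4:e77d")) // IPv4-in-IPv6: left as written
	fmt.Println(normalizeHost("127.0.0.1"))        // IPv4: left as written
}
```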
File diff suppressed because it is too large
369
caddyconfig/httpcaddyfile/builtins_test.go
Normal file
|
@ -0,0 +1,369 @@
|
|||
package httpcaddyfile
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
|
||||
_ "github.com/caddyserver/caddy/v2/modules/logging"
|
||||
)
|
||||
|
||||
func TestLogDirectiveSyntax(t *testing.T) {
|
||||
for i, tc := range []struct {
|
||||
input string
|
||||
output string
|
||||
expectError bool
|
||||
}{
|
||||
{
|
||||
input: `:8080 {
|
||||
log
|
||||
}
|
||||
`,
|
||||
output: `{"apps":{"http":{"servers":{"srv0":{"listen":[":8080"],"logs":{}}}}}}`,
|
||||
expectError: false,
|
||||
},
|
||||
{
|
||||
input: `:8080 {
|
||||
log {
|
||||
core mock
|
||||
					output file foo.log
				}
			}
			`,
			output: `{"logging":{"logs":{"default":{"exclude":["http.log.access.log0"]},"log0":{"writer":{"filename":"foo.log","output":"file"},"core":{"module":"mock"},"include":["http.log.access.log0"]}}},"apps":{"http":{"servers":{"srv0":{"listen":[":8080"],"logs":{"default_logger_name":"log0"}}}}}}`,
			expectError: false,
		},
		{
			input: `:8080 {
				log {
					format filter {
						wrap console
						fields {
							request>remote_ip ip_mask {
								ipv4 24
								ipv6 32
							}
						}
					}
				}
			}
			`,
			output: `{"logging":{"logs":{"default":{"exclude":["http.log.access.log0"]},"log0":{"encoder":{"fields":{"request\u003eremote_ip":{"filter":"ip_mask","ipv4_cidr":24,"ipv6_cidr":32}},"format":"filter","wrap":{"format":"console"}},"include":["http.log.access.log0"]}}},"apps":{"http":{"servers":{"srv0":{"listen":[":8080"],"logs":{"default_logger_name":"log0"}}}}}}`,
			expectError: false,
		},
		{
			input: `:8080 {
				log name-override {
					core mock
					output file foo.log
				}
			}
			`,
			output: `{"logging":{"logs":{"default":{"exclude":["http.log.access.name-override"]},"name-override":{"writer":{"filename":"foo.log","output":"file"},"core":{"module":"mock"},"include":["http.log.access.name-override"]}}},"apps":{"http":{"servers":{"srv0":{"listen":[":8080"],"logs":{"default_logger_name":"name-override"}}}}}}`,
			expectError: false,
		},
		{
			input: `:8080 {
				log {
					sampling {
						interval 2
						first 3
						thereafter 4
					}
				}
			}
			`,
			output: `{"logging":{"logs":{"default":{"exclude":["http.log.access.log0"]},"log0":{"sampling":{"interval":2,"first":3,"thereafter":4},"include":["http.log.access.log0"]}}},"apps":{"http":{"servers":{"srv0":{"listen":[":8080"],"logs":{"default_logger_name":"log0"}}}}}}`,
			expectError: false,
		},
	} {

		adapter := caddyfile.Adapter{
			ServerType: ServerType{},
		}

		out, _, err := adapter.Adapt([]byte(tc.input), nil)

		if err != nil != tc.expectError {
			t.Errorf("Test %d error expectation failed Expected: %v, got %s", i, tc.expectError, err)
			continue
		}

		if string(out) != tc.output {
			t.Errorf("Test %d error output mismatch Expected: %s, got %s", i, tc.output, out)
		}
	}
}
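The `ip_mask` test case above configures CIDR masking of logged client IPs (`ipv4 24` keeps the first 24 bits, `ipv6 32` the first 32). The stdlib-only sketch below shows the masking arithmetic itself; `maskIP` is my own name, not Caddy's API:

```go
package main

import (
	"fmt"
	"net/netip"
)

// maskIP zeroes the host bits of addr beyond the given prefix length,
// which is the effect the ip_mask log filter has on client addresses.
func maskIP(addr string, v4Bits, v6Bits int) (string, error) {
	ip, err := netip.ParseAddr(addr)
	if err != nil {
		return "", err
	}
	bits := v6Bits
	if ip.Is4() {
		bits = v4Bits
	}
	p, err := ip.Prefix(bits) // Prefix zeroes the remaining host bits
	if err != nil {
		return "", err
	}
	return p.Addr().String(), nil
}

func main() {
	masked, _ := maskIP("203.0.113.45", 24, 32)
	fmt.Println(masked) // 203.0.113.0
}
```

This keeps logs useful for per-network debugging while discarding the host-identifying bits.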
func TestRedirDirectiveSyntax(t *testing.T) {
	for i, tc := range []struct {
		input       string
		expectError bool
	}{
		{
			input: `:8080 {
				redir :8081
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir * :8081
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /api/* :8081 300
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir :8081 300
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /api/* :8081 399
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir :8081 399
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /old.html /new.html
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /old.html /new.html temporary
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir https://example.com{uri} permanent
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /old.html /new.html permanent
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /old.html /new.html html
			}`,
			expectError: false,
		},
		{
			// this is now allowed so a Location header
			// can be written and consumed by JS
			// in the case of XHR requests
			input: `:8080 {
				redir * :8081 401
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir * :8081 402
			}`,
			expectError: true,
		},
		{
			input: `:8080 {
				redir * :8081 {http.reverse_proxy.status_code}
			}`,
			expectError: false,
		},
		{
			input: `:8080 {
				redir /old.html /new.html htlm
			}`,
			expectError: true,
		},
		{
			input: `:8080 {
				redir * :8081 200
			}`,
			expectError: true,
		},
		{
			input: `:8080 {
				redir * :8081 temp
			}`,
			expectError: true,
		},
		{
			input: `:8080 {
				redir * :8081 perm
			}`,
			expectError: true,
		},
		{
			input: `:8080 {
				redir * :8081 php
			}`,
			expectError: true,
		},
	} {

		adapter := caddyfile.Adapter{
			ServerType: ServerType{},
		}

		_, _, err := adapter.Adapt([]byte(tc.input), nil)

		if err != nil != tc.expectError {
			t.Errorf("Test %d error expectation failed Expected: %v, got %s", i, tc.expectError, err)
			continue
		}
	}
}
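The accepted and rejected cases above imply the validation rule for a `redir` status argument: the named codes, placeholders (resolved at request time), any 3xx code, and 401 (so XHR clients can read the Location header) pass; everything else, including typos like `htlm`, is rejected. A standalone sketch of that rule, with my own function name rather than Caddy's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// validRedirCode mirrors the rule the test cases above imply:
// named codes, placeholders, 3xx, and 401 are accepted.
func validRedirCode(code string) bool {
	switch code {
	case "permanent", "temporary", "html":
		return true
	}
	if strings.HasPrefix(code, "{") && strings.HasSuffix(code, "}") {
		return true // placeholder, e.g. {http.reverse_proxy.status_code}
	}
	n, err := strconv.Atoi(code)
	if err != nil {
		return false // e.g. "htlm", "temp", "perm", "php"
	}
	return (n >= 300 && n <= 399) || n == 401
}

func main() {
	fmt.Println(validRedirCode("302"), validRedirCode("401"), validRedirCode("200"), validRedirCode("htlm"))
}
```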
func TestImportErrorLine(t *testing.T) {
	for i, tc := range []struct {
		input     string
		errorFunc func(err error) bool
	}{
		{
			input: `(t1) {
				abort {args[:]}
			}
			:8080 {
				import t1
				import t1 true
			}`,
			errorFunc: func(err error) bool {
				return err != nil && strings.Contains(err.Error(), "Caddyfile:6 (import t1)")
			},
		},
		{
			input: `(t1) {
				abort {args[:]}
			}
			:8080 {
				import t1 true
			}`,
			errorFunc: func(err error) bool {
				return err != nil && strings.Contains(err.Error(), "Caddyfile:5 (import t1)")
			},
		},
		{
			input: `
			import testdata/import_variadic_snippet.txt
			:8080 {
				import t1 true
			}`,
			errorFunc: func(err error) bool {
				return err == nil
			},
		},
		{
			input: `
			import testdata/import_variadic_with_import.txt
			:8080 {
				import t1 true
				import t2 true
			}`,
			errorFunc: func(err error) bool {
				return err == nil
			},
		},
	} {
		adapter := caddyfile.Adapter{
			ServerType: ServerType{},
		}

		_, _, err := adapter.Adapt([]byte(tc.input), nil)

		if !tc.errorFunc(err) {
			t.Errorf("Test %d error expectation failed, got %s", i, err)
			continue
		}
	}
}
func TestNestedImport(t *testing.T) {
	for i, tc := range []struct {
		input     string
		errorFunc func(err error) bool
	}{
		{
			input: `(t1) {
				respond {args[0]} {args[1]}
			}

			(t2) {
				import t1 {args[0]} 202
			}

			:8080 {
				handle {
					import t2 "foobar"
				}
			}`,
			errorFunc: func(err error) bool {
				return err == nil
			},
		},
		{
			input: `(t1) {
				respond {args[:]}
			}

			(t2) {
				import t1 {args[0]} {args[1]}
			}

			:8080 {
				handle {
					import t2 "foobar" 202
				}
			}`,
			errorFunc: func(err error) bool {
				return err == nil
			},
		},
		{
			input: `(t1) {
				respond {args[0]} {args[1]}
			}

			(t2) {
				import t1 {args[:]}
			}

			:8080 {
				handle {
					import t2 "foobar" 202
				}
			}`,
			errorFunc: func(err error) bool {
				return err == nil
			},
		},
	} {
		adapter := caddyfile.Adapter{
			ServerType: ServerType{},
		}

		_, _, err := adapter.Adapt([]byte(tc.input), nil)

		if !tc.errorFunc(err) {
			t.Errorf("Test %d error expectation failed, got %s", i, err)
			continue
		}
	}
}
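The nested-import tests above exercise how snippet arguments are forwarded: `{args[0]}` takes a single positional argument while `{args[:]}` splices in all of them, even across one level of nesting. A much-simplified, stdlib-only sketch of that substitution (`expandArgs` is my own name; the real tokenizer works on token streams, not plain strings):

```go
package main

import (
	"fmt"
	"strings"
)

// expandArgs replaces {args[N]} and {args[:]} placeholders in one
// snippet line with the arguments passed at the import site.
func expandArgs(line string, args []string) string {
	line = strings.ReplaceAll(line, "{args[:]}", strings.Join(args, " "))
	for i, a := range args {
		line = strings.ReplaceAll(line, fmt.Sprintf("{args[%d]}", i), a)
	}
	return line
}

func main() {
	fmt.Println(expandArgs("respond {args[0]} {args[1]}", []string{"foobar", "202"}))
	fmt.Println(expandArgs("respond {args[:]}", []string{"foobar", "202"}))
}
```

Both calls print `respond foobar 202`, which is why the three test cases above are all expected to adapt without error.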
@@ -17,6 +17,7 @@ package httpcaddyfile

import (
	"encoding/json"
	"net"
	"slices"
	"sort"
	"strconv"
	"strings"

@@ -27,54 +28,78 @@ import (
	"github.com/caddyserver/caddy/v2/modules/caddyhttp"
)

// directiveOrder specifies the order
// to apply directives in HTTP routes.
// defaultDirectiveOrder specifies the default order
// to apply directives in HTTP routes. This must only
// consist of directives that are included in Caddy's
// standard distribution.
//
// The root directive goes first in case rewrites or
// redirects depend on existence of files, i.e. the
// file matcher, which must know the root first.
// e.g. The 'root' directive goes near the start in
// case rewrites or redirects depend on existence of
// files, i.e. the file matcher, which must know the
// root first.
//
// The header directive goes second so that headers
// can be manipulated before doing redirects.
var directiveOrder = []string{
// e.g. The 'header' directive goes before 'redir' so
// that headers can be manipulated before doing redirects.
//
// e.g. The 'respond' directive is near the end because it
// writes a response and terminates the middleware chain.
var defaultDirectiveOrder = []string{
	"tracing",

	// set variables that may be used by other directives
	"map",
	"vars",
	"fs",
	"root",
	"log_append",
	"skip_log", // TODO: deprecated, renamed to log_skip
	"log_skip",
	"log_name",

	"header",
	"copy_response_headers", // only in reverse_proxy's handle_response
	"request_body",

	"redir",
	"rewrite",

	// URI manipulation
	// incoming request manipulation
	"method",
	"rewrite",
	"uri",
	"try_files",

	// middleware handlers; some wrap responses
	"basicauth",
	"basicauth", // TODO: deprecated, renamed to basic_auth
	"basic_auth",
	"forward_auth",
	"request_header",
	"encode",
	"push",
	"intercept",
	"templates",

	// special routing directives
	// special routing & dispatching directives
	"invoke",
	"handle",
	"handle_path",
	"route",

	// handlers that typically respond to requests
	"abort",
	"error",
	"copy_response", // only in reverse_proxy's handle_response
	"respond",
	"metrics",
	"reverse_proxy",
	"php_fastcgi",
	"file_server",
	"acme_server",
}

// directiveIsOrdered returns true if dir is
// a known, ordered (sorted) directive.
func directiveIsOrdered(dir string) bool {
	for _, d := range directiveOrder {
		if d == dir {
			return true
		}
	}
	return false
}
// directiveOrder specifies the order to apply directives
// in HTTP routes, after being modified by either the
// plugins or by the user via the "order" global option.
var directiveOrder = defaultDirectiveOrder

// RegisterDirective registers a unique directive dir with an
// associated unmarshaling (setup) function. When directive dir

@@ -97,20 +122,11 @@ func RegisterHandlerDirective(dir string, setupFunc UnmarshalHandlerFunc) {
			return nil, h.ArgErr()
		}

		matcherSet, ok, err := h.MatcherToken()
		matcherSet, err := h.ExtractMatcherSet()
		if err != nil {
			return nil, err
		}
		if ok {
			// strip matcher token; we don't need to
			// use the return value here because a
			// new dispenser should have been made
			// solely for this directive's tokens,
			// with no other uses of same slice
			h.Dispenser.Delete()
		}

		h.Dispenser.Reset() // pretend this lookahead never happened
		val, err := setupFunc(h)
		if err != nil {
			return nil, err

@@ -120,13 +136,71 @@ func RegisterHandlerDirective(dir string, setupFunc UnmarshalHandlerFunc) {
	})
}

// RegisterDirectiveOrder registers the default order for a
// directive from a plugin.
//
// This is useful when a plugin has a well-understood place
// it should run in the middleware pipeline, and it allows
// users to avoid having to define the order themselves.
//
// The directive dir may be placed in the position relative
// to ('before' or 'after') a directive included in Caddy's
// standard distribution. It cannot be relative to another
// plugin's directive.
//
// EXPERIMENTAL: This API may change or be removed.
func RegisterDirectiveOrder(dir string, position Positional, standardDir string) {
	// check if directive was already ordered
	if slices.Contains(directiveOrder, dir) {
		panic("directive '" + dir + "' already ordered")
	}

	if position != Before && position != After {
		panic("the 2nd argument must be either 'before' or 'after', got '" + position + "'")
	}

	// check if directive exists in standard distribution, since
	// we can't allow plugins to depend on one another; we can't
	// guarantee the order that plugins are loaded in.
	foundStandardDir := slices.Contains(defaultDirectiveOrder, standardDir)
	if !foundStandardDir {
		panic("the 3rd argument '" + standardDir + "' must be a directive that exists in the standard distribution of Caddy")
	}

	// insert directive into proper position
	newOrder := directiveOrder
	for i, d := range newOrder {
		if d != standardDir {
			continue
		}
		if position == Before {
			newOrder = append(newOrder[:i], append([]string{dir}, newOrder[i:]...)...)
		} else if position == After {
			newOrder = append(newOrder[:i+1], append([]string{dir}, newOrder[i+1:]...)...)
		}
		break
	}
	directiveOrder = newOrder
}

// RegisterGlobalOption registers a unique global option opt with
// an associated unmarshaling (setup) function. When the global
// option opt is encountered in a Caddyfile, setupFunc will be
// called to unmarshal its tokens.
func RegisterGlobalOption(opt string, setupFunc UnmarshalGlobalFunc) {
	if _, ok := registeredGlobalOptions[opt]; ok {
		panic("global option " + opt + " already registered")
	}
	registeredGlobalOptions[opt] = setupFunc
}

// Helper is a type which helps setup a value from
// Caddyfile tokens.
type Helper struct {
	*caddyfile.Dispenser
	// State stores intermediate variables during caddyfile adaptation.
	State map[string]interface{}
	options map[string]interface{}
	State       map[string]any
	options     map[string]any
	warnings    *[]caddyconfig.Warning
	matcherDefs map[string]caddy.ModuleMap
	parentBlock caddyfile.ServerBlock

@@ -134,7 +208,7 @@ type Helper struct {
}

// Option gets the option keyed by name.
func (h Helper) Option(name string) interface{} {
func (h Helper) Option(name string) any {
	return h.options[name]
}

@@ -154,11 +228,12 @@ func (h Helper) Caddyfiles() []string {
	for file := range files {
		filesSlice = append(filesSlice, file)
	}
	sort.Strings(filesSlice)
	return filesSlice
}

// JSON converts val into JSON. Any errors are added to warnings.
func (h Helper) JSON(val interface{}) json.RawMessage {
func (h Helper) JSON(val any) json.RawMessage {
	return caddyconfig.JSON(val, h.warnings)
}

@@ -184,7 +259,12 @@ func (h Helper) ExtractMatcherSet() (caddy.ModuleMap, error) {
		return nil, err
	}
	if hasMatcher {
		h.Dispenser.Delete() // strip matcher token
		// strip matcher token; we don't need to
		// use the return value here because a
		// new dispenser should have been made
		// solely for this directive's tokens,
		// with no other uses of same slice
		h.Dispenser.Delete()
	}
	h.Dispenser.Reset() // pretend this lookahead never happened
	return matcherSet, nil

@@ -192,7 +272,8 @@ func (h Helper) ExtractMatcherSet() (caddy.ModuleMap, error) {

// NewRoute returns config values relevant to creating a new HTTP route.
func (h Helper) NewRoute(matcherSet caddy.ModuleMap,
	handler caddyhttp.MiddlewareHandler) []ConfigValue {
	handler caddyhttp.MiddlewareHandler,
) []ConfigValue {
	mod, err := caddy.GetModule(caddy.GetModuleID(handler))
	if err != nil {
		*h.warnings = append(*h.warnings, caddyconfig.Warning{

@@ -244,10 +325,92 @@ func (h Helper) GroupRoutes(vals []ConfigValue) {
	}
}

// NewBindAddresses returns config values relevant to adding
// listener bind addresses to the config.
func (h Helper) NewBindAddresses(addrs []string) []ConfigValue {
	return []ConfigValue{{Class: "bind", Value: addrs}}
// WithDispenser returns a new instance based on d. All others Helper
// fields are copied, so typically maps are shared with this new instance.
func (h Helper) WithDispenser(d *caddyfile.Dispenser) Helper {
	h.Dispenser = d
	return h
}

// ParseSegmentAsSubroute parses the segment such that its subdirectives
// are themselves treated as directives, from which a subroute is built
// and returned.
func ParseSegmentAsSubroute(h Helper) (caddyhttp.MiddlewareHandler, error) {
	allResults, err := parseSegmentAsConfig(h)
	if err != nil {
		return nil, err
	}

	return buildSubroute(allResults, h.groupCounter, true)
}

// parseSegmentAsConfig parses the segment such that its subdirectives
// are themselves treated as directives, including named matcher definitions,
// and the raw Config structs are returned.
func parseSegmentAsConfig(h Helper) ([]ConfigValue, error) {
	var allResults []ConfigValue

	for h.Next() {
		// don't allow non-matcher args on the first line
		if h.NextArg() {
			return nil, h.ArgErr()
		}

		// slice the linear list of tokens into top-level segments
		var segments []caddyfile.Segment
		for nesting := h.Nesting(); h.NextBlock(nesting); {
			segments = append(segments, h.NextSegment())
		}

		// copy existing matcher definitions so we can augment
		// new ones that are defined only in this scope
		matcherDefs := make(map[string]caddy.ModuleMap, len(h.matcherDefs))
		for key, val := range h.matcherDefs {
			matcherDefs[key] = val
		}

		// find and extract any embedded matcher definitions in this scope
		for i := 0; i < len(segments); i++ {
			seg := segments[i]
			if strings.HasPrefix(seg.Directive(), matcherPrefix) {
				// parse, then add the matcher to matcherDefs
				err := parseMatcherDefinitions(caddyfile.NewDispenser(seg), matcherDefs)
				if err != nil {
					return nil, err
				}
				// remove the matcher segment (consumed), then step back the loop
				segments = append(segments[:i], segments[i+1:]...)
				i--
			}
		}

		// with matchers ready to go, evaluate each directive's segment
		for _, seg := range segments {
			dir := seg.Directive()
			dirFunc, ok := registeredDirectives[dir]
			if !ok {
				return nil, h.Errf("unrecognized directive: %s - are you sure your Caddyfile structure (nesting and braces) is correct?", dir)
			}

			subHelper := h
			subHelper.Dispenser = caddyfile.NewDispenser(seg)
			subHelper.matcherDefs = matcherDefs

			results, err := dirFunc(subHelper)
			if err != nil {
				return nil, h.Errf("parsing caddyfile tokens for '%s': %v", dir, err)
			}

			dir = normalizeDirectiveName(dir)

			for _, result := range results {
				result.directive = dir
				allResults = append(allResults, result)
			}
		}
	}

	return allResults, nil
}

// ConfigValue represents a value to be added to the final

@@ -265,7 +428,7 @@ type ConfigValue struct {
	// The value to be used when building the config.
	// Generally its type is associated with the
	// name of the Class.
	Value interface{}
	Value any

	directive string
}

@@ -276,120 +439,86 @@ func sortRoutes(routes []ConfigValue) {
		dirPositions[dir] = i
	}

	// while we are sorting, we will need to decode a route's path matcher
	// in order to sub-sort by path length; we can amortize this operation
	// for efficiency by storing the decoded matchers in a slice
	decodedMatchers := make([]caddyhttp.MatchPath, len(routes))

	sort.SliceStable(routes, func(i, j int) bool {
		// if the directives are different, just use the established directive order
		iDir, jDir := routes[i].directive, routes[j].directive
		if iDir == jDir {
			// directives are the same; sub-sort by path matcher length
			// if there's only one matcher set and one path (common case)
			iRoute, ok := routes[i].Value.(caddyhttp.Route)
			if !ok {
				return false
			}
			jRoute, ok := routes[j].Value.(caddyhttp.Route)
			if !ok {
				return false
			}

			// use already-decoded matcher, or decode if it's the first time seeing it
			iPM, jPM := decodedMatchers[i], decodedMatchers[j]
			if iPM == nil && len(iRoute.MatcherSetsRaw) == 1 {
				var pathMatcher caddyhttp.MatchPath
				_ = json.Unmarshal(iRoute.MatcherSetsRaw[0]["path"], &pathMatcher)
				decodedMatchers[i] = pathMatcher
				iPM = pathMatcher
			}
			if jPM == nil && len(jRoute.MatcherSetsRaw) == 1 {
				var pathMatcher caddyhttp.MatchPath
				_ = json.Unmarshal(jRoute.MatcherSetsRaw[0]["path"], &pathMatcher)
				decodedMatchers[j] = pathMatcher
				jPM = pathMatcher
			}

			// sort by longer path (more specific) first; missing
			// path matchers are treated as zero-length paths
			var iPathLen, jPathLen int
			if iPM != nil {
				iPathLen = len(iPM[0])
			}
			if jPM != nil {
				jPathLen = len(jPM[0])
			}
			return iPathLen > jPathLen
		if iDir != jDir {
			return dirPositions[iDir] < dirPositions[jDir]
		}

		return dirPositions[iDir] < dirPositions[jDir]
	})
}

// parseSegmentAsSubroute parses the segment such that its subdirectives
// are themselves treated as directives, from which a subroute is built
// and returned.
func parseSegmentAsSubroute(h Helper) (caddyhttp.MiddlewareHandler, error) {
	var allResults []ConfigValue

	for h.Next() {
		// slice the linear list of tokens into top-level segments
		var segments []caddyfile.Segment
		for nesting := h.Nesting(); h.NextBlock(nesting); {
			segments = append(segments, h.NextSegment())
		// directives are the same; sub-sort by path matcher length if there's
		// only one matcher set and one path (this is a very common case and
		// usually -- but not always -- helpful/expected, oh well; user can
		// always take manual control of order using handler or route blocks)
		iRoute, ok := routes[i].Value.(caddyhttp.Route)
		if !ok {
			return false
		}
		jRoute, ok := routes[j].Value.(caddyhttp.Route)
		if !ok {
			return false
		}

		// copy existing matcher definitions so we can augment
		// new ones that are defined only in this scope
		matcherDefs := make(map[string]caddy.ModuleMap, len(h.matcherDefs))
		for key, val := range h.matcherDefs {
			matcherDefs[key] = val
		// decode the path matchers if there is just one matcher set
		var iPM, jPM caddyhttp.MatchPath
		if len(iRoute.MatcherSetsRaw) == 1 {
			_ = json.Unmarshal(iRoute.MatcherSetsRaw[0]["path"], &iPM)
		}
		if len(jRoute.MatcherSetsRaw) == 1 {
			_ = json.Unmarshal(jRoute.MatcherSetsRaw[0]["path"], &jPM)
		}

		// find and extract any embedded matcher definitions in this scope
		for i, seg := range segments {
			if strings.HasPrefix(seg.Directive(), matcherPrefix) {
				err := parseMatcherDefinitions(caddyfile.NewDispenser(seg), matcherDefs)
				if err != nil {
					return nil, err
		// if there is only one path in the path matcher, sort by longer path
		// (more specific) first; missing path matchers or multi-matchers are
		// treated as zero-length paths
		var iPathLen, jPathLen int
		if len(iPM) == 1 {
			iPathLen = len(iPM[0])
		}
		if len(jPM) == 1 {
			jPathLen = len(jPM[0])
		}

		sortByPath := func() bool {
			// we can only confidently compare path lengths if both
			// directives have a single path to match (issue #5037)
			if iPathLen > 0 && jPathLen > 0 {
				// if both paths are the same except for a trailing wildcard,
				// sort by the shorter path first (which is more specific)
				if strings.TrimSuffix(iPM[0], "*") == strings.TrimSuffix(jPM[0], "*") {
					return iPathLen < jPathLen
				}
				segments = append(segments[:i], segments[i+1:]...)

				// sort most-specific (longest) path first
				return iPathLen > jPathLen
			}

			// if both directives don't have a single path to compare,
			// sort whichever one has a matcher first; if both have
			// a matcher, sort equally (stable sort preserves order)
			return len(iRoute.MatcherSetsRaw) > 0 && len(jRoute.MatcherSetsRaw) == 0
		}()

		// some directives involve setting values which can overwrite
		// each other, so it makes most sense to reverse the order so
		// that the least-specific matcher is first, allowing the last
		// matching one to win
		if iDir == "vars" {
			return !sortByPath
		}

		// with matchers ready to go, evaluate each directive's segment
		for _, seg := range segments {
			dir := seg.Directive()
			dirFunc, ok := registeredDirectives[dir]
			if !ok {
				return nil, h.Errf("unrecognized directive: %s", dir)
			}

			subHelper := h
			subHelper.Dispenser = caddyfile.NewDispenser(seg)
			subHelper.matcherDefs = matcherDefs

			results, err := dirFunc(subHelper)
			if err != nil {
				return nil, h.Errf("parsing caddyfile tokens for '%s': %v", dir, err)
			}
			for _, result := range results {
				result.directive = dir
				allResults = append(allResults, result)
			}
		}
	}

	return buildSubroute(allResults, h.groupCounter)
		// everything else is most-specific matcher first
		return sortByPath
	})
}
// serverBlock pairs a Caddyfile server block with
// a "pile" of config values, keyed by class name,
// as well as its parsed keys for convenience.
type serverBlock struct {
	block caddyfile.ServerBlock
	pile map[string][]ConfigValue // config values obtained from directives
	keys []Address
	block      caddyfile.ServerBlock
	pile       map[string][]ConfigValue // config values obtained from directives
	parsedKeys []Address
}

// hostsFromKeys returns a list of all the non-empty hostnames found in

@@ -406,7 +535,7 @@ type serverBlock struct {
func (sb serverBlock) hostsFromKeys(loggerMode bool) []string {
	// ensure each entry in our list is unique
	hostMap := make(map[string]struct{})
	for _, addr := range sb.keys {
	for _, addr := range sb.parsedKeys {
		if addr.Host == "" {
			if !loggerMode {
				// server block contains a key like ":443", i.e. the host portion

@@ -435,17 +564,53 @@ func (sb serverBlock) hostsFromKeys(loggerMode bool) []string {
	return sblockHosts
}

func (sb serverBlock) hostsFromKeysNotHTTP(httpPort string) []string {
	// ensure each entry in our list is unique
	hostMap := make(map[string]struct{})
	for _, addr := range sb.parsedKeys {
		if addr.Host == "" {
			continue
		}
		if addr.Scheme != "http" && addr.Port != httpPort {
			hostMap[addr.Host] = struct{}{}
		}
	}

	// convert map to slice
	sblockHosts := make([]string, 0, len(hostMap))
	for host := range hostMap {
		sblockHosts = append(sblockHosts, host)
	}

	return sblockHosts
}

// hasHostCatchAllKey returns true if sb has a key that
// omits a host portion, i.e. it "catches all" hosts.
func (sb serverBlock) hasHostCatchAllKey() bool {
	for _, addr := range sb.keys {
		if addr.Host == "" {
			return true
		}
	}
	return false
	return slices.ContainsFunc(sb.parsedKeys, func(addr Address) bool {
		return addr.Host == ""
	})
}

// isAllHTTP returns true if all sb keys explicitly specify
// the http:// scheme
func (sb serverBlock) isAllHTTP() bool {
	return !slices.ContainsFunc(sb.parsedKeys, func(addr Address) bool {
		return addr.Scheme != "http"
	})
}

// Positional are the supported modes for ordering directives.
type Positional string

const (
	Before Positional = "before"
	After  Positional = "after"
	First  Positional = "first"
	Last   Positional = "last"
)

type (
	// UnmarshalFunc is a function which can unmarshal Caddyfile
	// tokens into zero or more config values using a Helper type.

@@ -462,6 +627,14 @@ type (
	// for you. These are passed to a call to
	// RegisterHandlerDirective.
	UnmarshalHandlerFunc func(h Helper) (caddyhttp.MiddlewareHandler, error)

	// UnmarshalGlobalFunc is a function which can unmarshal Caddyfile
	// tokens from a global option. It is passed the tokens to parse and
	// existing value from the previous instance of this global option
	// (if any). It returns the value to associate with this global option.
	UnmarshalGlobalFunc func(d *caddyfile.Dispenser, existingVal any) (any, error)
)

var registeredDirectives = make(map[string]UnmarshalFunc)

var registeredGlobalOptions = make(map[string]UnmarshalGlobalFunc)

@@ -31,20 +31,23 @@ func TestHostsFromKeys(t *testing.T) {
			[]Address{
				{Original: ":2015", Port: "2015"},
			},
			[]string{}, []string{},
			[]string{},
			[]string{},
		},
		{
			[]Address{
				{Original: ":443", Port: "443"},
			},
			[]string{}, []string{},
			[]string{},
			[]string{},
		},
		{
			[]Address{
				{Original: "foo", Host: "foo"},
				{Original: ":2015", Port: "2015"},
			},
			[]string{}, []string{"foo"},
			[]string{},
			[]string{"foo"},
		},
		{
			[]Address{

@@ -75,7 +78,7 @@ func TestHostsFromKeys(t *testing.T) {
			[]string{"example.com:2015"},
		},
	} {
		sb := serverBlock{keys: tc.keys}
		sb := serverBlock{parsedKeys: tc.keys}

		// test in normal mode
		actual := sb.hostsFromKeys(false)

File diff suppressed because it is too large
@ -9,7 +9,6 @@ import (
|
|||
func TestMatcherSyntax(t *testing.T) {
|
||||
for i, tc := range []struct {
|
||||
input string
|
||||
expectWarn bool
|
||||
expectError bool
|
||||
}{
|
||||
{
|
||||
|
@ -18,7 +17,6 @@ func TestMatcherSyntax(t *testing.T) {
|
|||
query showdebug=1
|
||||
}
|
||||
`,
|
||||
expectWarn: false,
|
||||
expectError: false,
|
||||
},
|
||||
{
|
||||
|
@ -27,7 +25,6 @@ func TestMatcherSyntax(t *testing.T) {
|
|||
query bad format
|
||||
}
|
||||
`,
|
||||
expectWarn: false,
|
||||
expectError: true,
|
||||
},
|
||||
{
|
||||
|
@ -38,7 +35,6 @@ func TestMatcherSyntax(t *testing.T) {
|
|||
}
|
||||
}
|
||||
`,
|
||||
expectWarn: false,
|
||||
expectError: false,
|
||||
},
|
||||
{
|
||||
|
@ -47,21 +43,29 @@ func TestMatcherSyntax(t *testing.T) {
|
|||
not path /somepath*
|
||||
}
|
||||
`,
|
||||
expectWarn: false,
|
||||
expectError: false,
|
||||
},
|
||||
{
|
||||
input: `http://localhost
|
||||
@debug not path /somepath*
|
||||
`,
|
||||
expectError: false,
|
||||
},
|
||||
{
|
||||
input: `@matcher {
|
||||
path /matcher-not-allowed/outside-of-site-block/*
|
||||
}
|
||||
http://localhost
|
||||
`,
|
||||
expectError: true,
|
||||
},
|
||||
} {
|
||||
|
||||
adapter := caddyfile.Adapter{
|
||||
ServerType: ServerType{},
|
||||
}
|
||||
|
||||
_, warnings, err := adapter.Adapt([]byte(tc.input), nil)
|
||||
|
||||
if len(warnings) > 0 != tc.expectWarn {
|
||||
t.Errorf("Test %d warning expectation failed Expected: %v, got %v", i, tc.expectWarn, warnings)
|
||||
continue
|
||||
}
|
||||
_, _, err := adapter.Adapt([]byte(tc.input), nil)
|
||||
|
||||
if err != nil != tc.expectError {
|
||||
t.Errorf("Test %d error expectation failed Expected: %v, got %s", i, tc.expectError, err)
|
||||
|
@@ -103,7 +107,6 @@ func TestSpecificity(t *testing.T) {

func TestGlobalOptions(t *testing.T) {
for i, tc := range []struct {
input string
expectWarn bool
expectError bool
}{
{
@@ -113,7 +116,6 @@ func TestGlobalOptions(t *testing.T) {
}
:80
`,
expectWarn: false,
expectError: false,
},
{
@@ -123,7 +125,6 @@ func TestGlobalOptions(t *testing.T) {
}
:80
`,
expectWarn: false,
expectError: false,
},
{
@@ -133,7 +134,6 @@ func TestGlobalOptions(t *testing.T) {
}
:80
`,
expectWarn: false,
expectError: false,
},
{
@@ -145,7 +145,54 @@ func TestGlobalOptions(t *testing.T) {
}
:80
`,
expectWarn: false,
expectError: true,
},
{
input: `
{
admin {
enforce_origin
origins 192.168.1.1:2020 127.0.0.1:2020
}
}
:80
`,
expectError: false,
},
{
input: `
{
admin 127.0.0.1:2020 {
enforce_origin
origins 192.168.1.1:2020 127.0.0.1:2020
}
}
:80
`,
expectError: false,
},
{
input: `
{
admin 192.168.1.1:2020 127.0.0.1:2020 {
enforce_origin
origins 192.168.1.1:2020 127.0.0.1:2020
}
}
:80
`,
expectError: true,
},
{
input: `
{
admin off {
enforce_origin
origins 192.168.1.1:2020 127.0.0.1:2020
}
}
:80
`,
expectError: true,
},
} {

@@ -154,12 +201,7 @@ func TestGlobalOptions(t *testing.T) {
ServerType: ServerType{},
}

_, warnings, err := adapter.Adapt([]byte(tc.input), nil)

if len(warnings) > 0 != tc.expectWarn {
t.Errorf("Test %d warning expectation failed Expected: %v, got %v", i, tc.expectWarn, warnings)
continue
}
_, _, err := adapter.Adapt([]byte(tc.input), nil)

if err != nil != tc.expectError {
t.Errorf("Test %d error expectation failed Expected: %v, got %s", i, tc.expectError, err)
@@ -15,156 +15,302 @@

package httpcaddyfile

import (
"slices"
"strconv"
"time"

"github.com/caddyserver/certmagic"
"github.com/mholt/acmez/v2/acme"

"github.com/caddyserver/caddy/v2"
"github.com/caddyserver/caddy/v2/caddyconfig"
"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
"github.com/caddyserver/caddy/v2/modules/caddyhttp"
"github.com/caddyserver/caddy/v2/modules/caddytls"
)

func parseOptHTTPPort(d *caddyfile.Dispenser) (int, error) {
func init() {
RegisterGlobalOption("debug", parseOptTrue)
RegisterGlobalOption("http_port", parseOptHTTPPort)
RegisterGlobalOption("https_port", parseOptHTTPSPort)
RegisterGlobalOption("default_bind", parseOptDefaultBind)
RegisterGlobalOption("grace_period", parseOptDuration)
RegisterGlobalOption("shutdown_delay", parseOptDuration)
RegisterGlobalOption("default_sni", parseOptSingleString)
RegisterGlobalOption("fallback_sni", parseOptSingleString)
RegisterGlobalOption("order", parseOptOrder)
RegisterGlobalOption("storage", parseOptStorage)
RegisterGlobalOption("storage_check", parseStorageCheck)
RegisterGlobalOption("storage_clean_interval", parseStorageCleanInterval)
RegisterGlobalOption("renew_interval", parseOptDuration)
RegisterGlobalOption("ocsp_interval", parseOptDuration)
RegisterGlobalOption("acme_ca", parseOptSingleString)
RegisterGlobalOption("acme_ca_root", parseOptSingleString)
RegisterGlobalOption("acme_dns", parseOptACMEDNS)
RegisterGlobalOption("acme_eab", parseOptACMEEAB)
RegisterGlobalOption("cert_issuer", parseOptCertIssuer)
RegisterGlobalOption("skip_install_trust", parseOptTrue)
RegisterGlobalOption("email", parseOptSingleString)
RegisterGlobalOption("admin", parseOptAdmin)
RegisterGlobalOption("on_demand_tls", parseOptOnDemand)
RegisterGlobalOption("local_certs", parseOptTrue)
RegisterGlobalOption("key_type", parseOptSingleString)
RegisterGlobalOption("auto_https", parseOptAutoHTTPS)
RegisterGlobalOption("metrics", parseMetricsOptions)
RegisterGlobalOption("servers", parseServerOptions)
RegisterGlobalOption("ocsp_stapling", parseOCSPStaplingOptions)
RegisterGlobalOption("cert_lifetime", parseOptDuration)
RegisterGlobalOption("log", parseLogOptions)
RegisterGlobalOption("preferred_chains", parseOptPreferredChains)
RegisterGlobalOption("persist_config", parseOptPersistConfig)
}

func parseOptTrue(d *caddyfile.Dispenser, _ any) (any, error) { return true, nil }

func parseOptHTTPPort(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
var httpPort int
for d.Next() {
var httpPortStr string
if !d.AllArgs(&httpPortStr) {
return 0, d.ArgErr()
}
var err error
httpPort, err = strconv.Atoi(httpPortStr)
if err != nil {
return 0, d.Errf("converting port '%s' to integer value: %v", httpPortStr, err)
}
var httpPortStr string
if !d.AllArgs(&httpPortStr) {
return 0, d.ArgErr()
}
var err error
httpPort, err = strconv.Atoi(httpPortStr)
if err != nil {
return 0, d.Errf("converting port '%s' to integer value: %v", httpPortStr, err)
}
return httpPort, nil
}

func parseOptHTTPSPort(d *caddyfile.Dispenser) (int, error) {
func parseOptHTTPSPort(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
var httpsPort int
for d.Next() {
var httpsPortStr string
if !d.AllArgs(&httpsPortStr) {
return 0, d.ArgErr()
}
var err error
httpsPort, err = strconv.Atoi(httpsPortStr)
if err != nil {
return 0, d.Errf("converting port '%s' to integer value: %v", httpsPortStr, err)
}
var httpsPortStr string
if !d.AllArgs(&httpsPortStr) {
return 0, d.ArgErr()
}
var err error
httpsPort, err = strconv.Atoi(httpsPortStr)
if err != nil {
return 0, d.Errf("converting port '%s' to integer value: %v", httpsPortStr, err)
}
return httpsPort, nil
}

func parseOptExperimentalHTTP3(d *caddyfile.Dispenser) (bool, error) {
return true, nil
}
func parseOptOrder(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name

func parseOptOrder(d *caddyfile.Dispenser) ([]string, error) {
newOrder := directiveOrder
// get directive name
if !d.Next() {
return nil, d.ArgErr()
}
dirName := d.Val()
if _, ok := registeredDirectives[dirName]; !ok {
return nil, d.Errf("%s is not a registered directive", dirName)
}

for d.Next() {
// get directive name
if !d.Next() {
return nil, d.ArgErr()
}
dirName := d.Val()
if _, ok := registeredDirectives[dirName]; !ok {
return nil, d.Errf("%s is not a registered directive", dirName)
}
// get positional token
if !d.Next() {
return nil, d.ArgErr()
}
pos := Positional(d.Val())

// get positional token
if !d.Next() {
return nil, d.ArgErr()
}
pos := d.Val()
// if directive already had an order, drop it
newOrder := slices.DeleteFunc(directiveOrder, func(d string) bool {
return d == dirName
})

// if directive exists, first remove it
for i, d := range newOrder {
if d == dirName {
newOrder = append(newOrder[:i], newOrder[i+1:]...)
break
}
}

// act on the positional
switch pos {
case "first":
newOrder = append([]string{dirName}, newOrder...)
if d.NextArg() {
return nil, d.ArgErr()
}
directiveOrder = newOrder
return newOrder, nil
case "last":
newOrder = append(newOrder, dirName)
if d.NextArg() {
return nil, d.ArgErr()
}
directiveOrder = newOrder
return newOrder, nil
case "before":
case "after":
default:
return nil, d.Errf("unknown positional '%s'", pos)
}

// get name of other directive
if !d.NextArg() {
return nil, d.ArgErr()
}
otherDir := d.Val()
// act on the positional; if it's First or Last, we're done right away
switch pos {
case First:
newOrder = append([]string{dirName}, newOrder...)
if d.NextArg() {
return nil, d.ArgErr()
}
directiveOrder = newOrder
return newOrder, nil

// insert directive into proper position
for i, d := range newOrder {
if d == otherDir {
if pos == "before" {
newOrder = append(newOrder[:i], append([]string{dirName}, newOrder[i:]...)...)
} else if pos == "after" {
newOrder = append(newOrder[:i+1], append([]string{dirName}, newOrder[i+1:]...)...)
}
break
}
case Last:
newOrder = append(newOrder, dirName)
if d.NextArg() {
return nil, d.ArgErr()
}
directiveOrder = newOrder
return newOrder, nil

// if it's Before or After, continue
case Before:
case After:

default:
return nil, d.Errf("unknown positional '%s'", pos)
}

// get name of other directive
if !d.NextArg() {
return nil, d.ArgErr()
}
otherDir := d.Val()
if d.NextArg() {
return nil, d.ArgErr()
}

// get the position of the target directive
targetIndex := slices.Index(newOrder, otherDir)
if targetIndex == -1 {
return nil, d.Errf("directive '%s' not found", otherDir)
}
// if we're inserting after, we need to increment the index to go after
if pos == After {
targetIndex++
}
// insert the directive into the new order
newOrder = slices.Insert(newOrder, targetIndex, dirName)

directiveOrder = newOrder

return newOrder, nil
}

func parseOptStorage(d *caddyfile.Dispenser) (caddy.StorageConverter, error) {
if !d.Next() {
func parseOptStorage(d *caddyfile.Dispenser, _ any) (any, error) {
if !d.Next() { // consume option name
return nil, d.ArgErr()
}
args := d.RemainingArgs()
if len(args) != 1 {
if !d.Next() { // get storage module name
return nil, d.ArgErr()
}
modName := args[0]
mod, err := caddy.GetModule("caddy.storage." + modName)
if err != nil {
return nil, d.Errf("getting storage module '%s': %v", modName, err)
}
unm, ok := mod.New().(caddyfile.Unmarshaler)
if !ok {
return nil, d.Errf("storage module '%s' is not a Caddyfile unmarshaler", mod.ID)
}
err = unm.UnmarshalCaddyfile(d.NewFromNextSegment())
modID := "caddy.storage." + d.Val()
unm, err := caddyfile.UnmarshalModule(d, modID)
if err != nil {
return nil, err
}
storage, ok := unm.(caddy.StorageConverter)
if !ok {
return nil, d.Errf("module %s is not a StorageConverter", mod.ID)
return nil, d.Errf("module %s is not a caddy.StorageConverter", modID)
}
return storage, nil
}

func parseOptSingleString(d *caddyfile.Dispenser) (string, error) {
d.Next() // consume parameter name
func parseStorageCheck(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
if !d.Next() {
return "", d.ArgErr()
}
val := d.Val()
if d.Next() {
return "", d.ArgErr()
}
if val != "off" {
return "", d.Errf("storage_check must be 'off'")
}
return val, nil
}

func parseStorageCleanInterval(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
if !d.Next() {
return "", d.ArgErr()
}
val := d.Val()
if d.Next() {
return "", d.ArgErr()
}
if val == "off" {
return false, nil
}
dur, err := caddy.ParseDuration(d.Val())
if err != nil {
return nil, d.Errf("failed to parse storage_clean_interval, must be a duration or 'off' %w", err)
}
return caddy.Duration(dur), nil
}

func parseOptDuration(d *caddyfile.Dispenser, _ any) (any, error) {
if !d.Next() { // consume option name
return nil, d.ArgErr()
}
if !d.Next() { // get duration value
return nil, d.ArgErr()
}
dur, err := caddy.ParseDuration(d.Val())
if err != nil {
return nil, err
}
return caddy.Duration(dur), nil
}

func parseOptACMEDNS(d *caddyfile.Dispenser, _ any) (any, error) {
if !d.Next() { // consume option name
return nil, d.ArgErr()
}
if !d.Next() { // get DNS module name
return nil, d.ArgErr()
}
modID := "dns.providers." + d.Val()
unm, err := caddyfile.UnmarshalModule(d, modID)
if err != nil {
return nil, err
}
prov, ok := unm.(certmagic.DNSProvider)
if !ok {
return nil, d.Errf("module %s (%T) is not a certmagic.DNSProvider", modID, unm)
}
return prov, nil
}

func parseOptACMEEAB(d *caddyfile.Dispenser, _ any) (any, error) {
eab := new(acme.EAB)
d.Next() // consume option name
if d.NextArg() {
return nil, d.ArgErr()
}
for d.NextBlock(0) {
switch d.Val() {
case "key_id":
if !d.NextArg() {
return nil, d.ArgErr()
}
eab.KeyID = d.Val()

case "mac_key":
if !d.NextArg() {
return nil, d.ArgErr()
}
eab.MACKey = d.Val()

default:
return nil, d.Errf("unrecognized parameter '%s'", d.Val())
}
}
return eab, nil
}

func parseOptCertIssuer(d *caddyfile.Dispenser, existing any) (any, error) {
d.Next() // consume option name

var issuers []certmagic.Issuer
if existing != nil {
issuers = existing.([]certmagic.Issuer)
}

// get issuer module name
if !d.Next() {
return nil, d.ArgErr()
}
modID := "tls.issuance." + d.Val()
unm, err := caddyfile.UnmarshalModule(d, modID)
if err != nil {
return nil, err
}
iss, ok := unm.(certmagic.Issuer)
if !ok {
return nil, d.Errf("module %s (%T) is not a certmagic.Issuer", modID, unm)
}
issuers = append(issuers, iss)
return issuers, nil
}

func parseOptSingleString(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
if !d.Next() {
return "", d.ArgErr()
}
@@ -175,72 +321,123 @@ func parseOptSingleString(d *caddyfile.Dispenser) (string, error) {
return val, nil
}

func parseOptAdmin(d *caddyfile.Dispenser) (string, error) {
if d.Next() {
var listenAddress string
if !d.AllArgs(&listenAddress) {
return "", d.ArgErr()
}
if listenAddress == "" {
listenAddress = caddy.DefaultAdminListen
}
return listenAddress, nil
func parseOptDefaultBind(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name

var addresses, protocols []string
addresses = d.RemainingArgs()

if len(addresses) == 0 {
addresses = append(addresses, "")
}
return "", nil

for d.NextBlock(0) {
switch d.Val() {
case "protocols":
protocols = d.RemainingArgs()
if len(protocols) == 0 {
return nil, d.Errf("protocols requires one or more arguments")
}
default:
return nil, d.Errf("unknown subdirective: %s", d.Val())
}
}

return []ConfigValue{{Class: "bind", Value: addressesWithProtocols{
addresses: addresses,
protocols: protocols,
}}}, nil
}

func parseOptOnDemand(d *caddyfile.Dispenser) (*caddytls.OnDemandConfig, error) {
var ond *caddytls.OnDemandConfig
for d.Next() {
if d.NextArg() {
return nil, d.ArgErr()
}
for nesting := d.Nesting(); d.NextBlock(nesting); {
switch d.Val() {
case "ask":
if !d.NextArg() {
return nil, d.ArgErr()
}
if ond == nil {
ond = new(caddytls.OnDemandConfig)
}
ond.Ask = d.Val()
func parseOptAdmin(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name

case "interval":
if !d.NextArg() {
return nil, d.ArgErr()
}
dur, err := time.ParseDuration(d.Val())
if err != nil {
return nil, err
}
if ond == nil {
ond = new(caddytls.OnDemandConfig)
}
if ond.RateLimit == nil {
ond.RateLimit = new(caddytls.RateLimit)
}
ond.RateLimit.Interval = caddy.Duration(dur)

case "burst":
if !d.NextArg() {
return nil, d.ArgErr()
}
burst, err := strconv.Atoi(d.Val())
if err != nil {
return nil, err
}
if ond == nil {
ond = new(caddytls.OnDemandConfig)
}
if ond.RateLimit == nil {
ond.RateLimit = new(caddytls.RateLimit)
}
ond.RateLimit.Burst = burst

default:
return nil, d.Errf("unrecognized parameter '%s'", d.Val())
adminCfg := new(caddy.AdminConfig)
if d.NextArg() {
listenAddress := d.Val()
if listenAddress == "off" {
adminCfg.Disabled = true
if d.Next() { // Do not accept any remaining options including block
return nil, d.Err("No more option is allowed after turning off admin config")
}
} else {
adminCfg.Listen = listenAddress
if d.NextArg() { // At most 1 arg is allowed
return nil, d.ArgErr()
}
}
}
for d.NextBlock(0) {
switch d.Val() {
case "enforce_origin":
adminCfg.EnforceOrigin = true

case "origins":
adminCfg.Origins = d.RemainingArgs()

default:
return nil, d.Errf("unrecognized parameter '%s'", d.Val())
}
}
if adminCfg.Listen == "" && !adminCfg.Disabled {
adminCfg.Listen = caddy.DefaultAdminListen
}
return adminCfg, nil
}

func parseOptOnDemand(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
if d.NextArg() {
return nil, d.ArgErr()
}

var ond *caddytls.OnDemandConfig

for nesting := d.Nesting(); d.NextBlock(nesting); {
switch d.Val() {
case "ask":
if !d.NextArg() {
return nil, d.ArgErr()
}
if ond == nil {
ond = new(caddytls.OnDemandConfig)
}
if ond.PermissionRaw != nil {
return nil, d.Err("on-demand TLS permission module (or 'ask') already specified")
}
perm := caddytls.PermissionByHTTP{Endpoint: d.Val()}
ond.PermissionRaw = caddyconfig.JSONModuleObject(perm, "module", "http", nil)

case "permission":
if !d.NextArg() {
return nil, d.ArgErr()
}
if ond == nil {
ond = new(caddytls.OnDemandConfig)
}
if ond.PermissionRaw != nil {
return nil, d.Err("on-demand TLS permission module (or 'ask') already specified")
}
modName := d.Val()
modID := "tls.permission." + modName
unm, err := caddyfile.UnmarshalModule(d, modID)
if err != nil {
return nil, err
}
perm, ok := unm.(caddytls.OnDemandPermission)
if !ok {
return nil, d.Errf("module %s (%T) is not an on-demand TLS permission module", modID, unm)
}
ond.PermissionRaw = caddyconfig.JSONModuleObject(perm, "module", modName, nil)

case "interval":
return nil, d.Errf("the on_demand_tls 'interval' option is no longer supported, remove it from your config")

case "burst":
return nil, d.Errf("the on_demand_tls 'burst' option is no longer supported, remove it from your config")

default:
return nil, d.Errf("unrecognized parameter '%s'", d.Val())
}
}
if ond == nil {
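The admin option above accepts at most one argument with three outcomes: "off" disables the endpoint (and rejects any further tokens), an explicit address overrides the listener, and no argument falls back to caddy.DefaultAdminListen. A minimal, hypothetical stand-in for that decision (the real parser works on Caddyfile tokens and a caddy.AdminConfig, not plain strings):

```go
package main

import "fmt"

// adminSetting mirrors the outcome of the "admin" global option.
type adminSetting struct {
	Listen   string
	Disabled bool
}

// parseAdminArg is an illustrative reduction of parseOptAdmin's
// argument handling to a pure function over the first argument.
func parseAdminArg(arg, defaultListen string) adminSetting {
	switch arg {
	case "off":
		return adminSetting{Disabled: true}
	case "":
		return adminSetting{Listen: defaultListen}
	default:
		return adminSetting{Listen: arg}
	}
}

func main() {
	fmt.Println(parseAdminArg("off", "localhost:2019"))
	fmt.Println(parseAdminArg("", "localhost:2019"))
	fmt.Println(parseAdminArg("127.0.0.1:2020", "localhost:2019"))
}
```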
@@ -248,3 +445,128 @@ func parseOptOnDemand(d *caddyfile.Dispenser) (*caddytls.OnDemandConfig, error)
}
return ond, nil
}

func parseOptPersistConfig(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
if !d.Next() {
return "", d.ArgErr()
}
val := d.Val()
if d.Next() {
return "", d.ArgErr()
}
if val != "off" {
return "", d.Errf("persist_config must be 'off'")
}
return val, nil
}

func parseOptAutoHTTPS(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
val := d.RemainingArgs()
if len(val) == 0 {
return "", d.ArgErr()
}
for _, v := range val {
switch v {
case "off":
case "disable_redirects":
case "disable_certs":
case "ignore_loaded_certs":
case "prefer_wildcard":
break

default:
return "", d.Errf("auto_https must be one of 'off', 'disable_redirects', 'disable_certs', 'ignore_loaded_certs', or 'prefer_wildcard'")
}
}
return val, nil
}

func unmarshalCaddyfileMetricsOptions(d *caddyfile.Dispenser) (any, error) {
d.Next() // consume option name
metrics := new(caddyhttp.Metrics)
for d.NextBlock(0) {
switch d.Val() {
case "per_host":
metrics.PerHost = true
default:
return nil, d.Errf("unrecognized servers option '%s'", d.Val())
}
}
return metrics, nil
}

func parseMetricsOptions(d *caddyfile.Dispenser, _ any) (any, error) {
return unmarshalCaddyfileMetricsOptions(d)
}

func parseServerOptions(d *caddyfile.Dispenser, _ any) (any, error) {
return unmarshalCaddyfileServerOptions(d)
}

func parseOCSPStaplingOptions(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next() // consume option name
var val string
if !d.AllArgs(&val) {
return nil, d.ArgErr()
}
if val != "off" {
return nil, d.Errf("invalid argument '%s'", val)
}
return certmagic.OCSPConfig{
DisableStapling: val == "off",
}, nil
}

// parseLogOptions parses the global log option. Syntax:
//
// log [name] {
// output <writer_module> ...
// format <encoder_module> ...
// level <level>
// include <namespaces...>
// exclude <namespaces...>
// }
//
// When the name argument is unspecified, this directive modifies the default
// logger.
func parseLogOptions(d *caddyfile.Dispenser, existingVal any) (any, error) {
currentNames := make(map[string]struct{})
if existingVal != nil {
innerVals, ok := existingVal.([]ConfigValue)
if !ok {
return nil, d.Errf("existing log values of unexpected type: %T", existingVal)
}
for _, rawVal := range innerVals {
val, ok := rawVal.Value.(namedCustomLog)
if !ok {
return nil, d.Errf("existing log value of unexpected type: %T", existingVal)
}
currentNames[val.name] = struct{}{}
}
}

var warnings []caddyconfig.Warning
// Call out the same parser that handles server-specific log configuration.
configValues, err := parseLogHelper(
Helper{
Dispenser: d,
warnings: &warnings,
},
currentNames,
)
if err != nil {
return nil, err
}
if len(warnings) > 0 {
return nil, d.Errf("warnings found in parsing global log options: %+v", warnings)
}

return configValues, nil
}

func parseOptPreferredChains(d *caddyfile.Dispenser, _ any) (any, error) {
d.Next()
return caddytls.ParseCaddyfilePreferredChainsOptions(d)
}
64
caddyconfig/httpcaddyfile/options_test.go
Normal file
@@ -0,0 +1,64 @@

package httpcaddyfile

import (
"testing"

"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
_ "github.com/caddyserver/caddy/v2/modules/logging"
)

func TestGlobalLogOptionSyntax(t *testing.T) {
for i, tc := range []struct {
input string
output string
expectError bool
}{
// NOTE: Additional test cases of successful Caddyfile parsing
// are present in: caddytest/integration/caddyfile_adapt/
{
input: `{
log default
}
`,
output: `{}`,
expectError: false,
},
{
input: `{
log example {
output file foo.log
}
log example {
format json
}
}
`,
expectError: true,
},
{
input: `{
log example /foo {
output file foo.log
}
}
`,
expectError: true,
},
} {

adapter := caddyfile.Adapter{
ServerType: ServerType{},
}

out, _, err := adapter.Adapt([]byte(tc.input), nil)

if err != nil != tc.expectError {
t.Errorf("Test %d error expectation failed Expected: %v, got %v", i, tc.expectError, err)
continue
}

if string(out) != tc.output {
t.Errorf("Test %d error output mismatch Expected: %s, got %s", i, tc.output, out)
}
}
}
229
caddyconfig/httpcaddyfile/pkiapp.go
Normal file
@ -0,0 +1,229 @@
|
|||
// Copyright 2015 Matthew Holt and The Caddy Authors
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package httpcaddyfile
|
||||
|
||||
import (
|
||||
"github.com/caddyserver/caddy/v2"
|
||||
"github.com/caddyserver/caddy/v2/caddyconfig"
|
||||
"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
|
||||
"github.com/caddyserver/caddy/v2/modules/caddypki"
|
||||
)
|
||||
|
||||
func init() {
|
||||
RegisterGlobalOption("pki", parsePKIApp)
|
||||
}
|
||||
|
||||
// parsePKIApp parses the global log option. Syntax:
|
||||
//
|
||||
// pki {
|
||||
// ca [<id>] {
|
||||
// name <name>
|
||||
// root_cn <name>
|
||||
// intermediate_cn <name>
|
||||
// intermediate_lifetime <duration>
|
||||
// root {
|
||||
// cert <path>
|
||||
// key <path>
|
||||
// format <format>
|
||||
// }
|
||||
// intermediate {
|
||||
// cert <path>
|
||||
// key <path>
|
||||
// format <format>
|
||||
// }
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// When the CA ID is unspecified, 'local' is assumed.
|
||||
func parsePKIApp(d *caddyfile.Dispenser, existingVal any) (any, error) {
|
||||
d.Next() // consume app name
|
||||
|
||||
pki := &caddypki.PKI{
|
||||
CAs: make(map[string]*caddypki.CA),
|
||||
}
|
||||
for d.NextBlock(0) {
|
||||
switch d.Val() {
|
||||
case "ca":
|
||||
pkiCa := new(caddypki.CA)
|
||||
if d.NextArg() {
|
||||
pkiCa.ID = d.Val()
|
||||
if d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
}
|
||||
if pkiCa.ID == "" {
|
||||
pkiCa.ID = caddypki.DefaultCAID
|
||||
}
|
||||
|
||||
for nesting := d.Nesting(); d.NextBlock(nesting); {
|
||||
switch d.Val() {
|
||||
case "name":
|
||||
if !d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
pkiCa.Name = d.Val()
|
||||
|
||||
case "root_cn":
|
||||
if !d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
pkiCa.RootCommonName = d.Val()
|
||||
|
||||
case "intermediate_cn":
|
||||
if !d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
pkiCa.IntermediateCommonName = d.Val()
|
||||
|
||||
case "intermediate_lifetime":
|
||||
if !d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
dur, err := caddy.ParseDuration(d.Val())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pkiCa.IntermediateLifetime = caddy.Duration(dur)
|
||||
|
||||
case "root":
|
||||
if pkiCa.Root == nil {
|
||||
pkiCa.Root = new(caddypki.KeyPair)
|
||||
}
|
||||
for nesting := d.Nesting(); d.NextBlock(nesting); {
|
||||
switch d.Val() {
|
||||
case "cert":
|
||||
if !d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
pkiCa.Root.Certificate = d.Val()
|
||||
|
||||
case "key":
|
||||
if !d.NextArg() {
|
||||
return nil, d.ArgErr()
|
||||
}
|
||||
							pkiCa.Root.PrivateKey = d.Val()

						case "format":
							if !d.NextArg() {
								return nil, d.ArgErr()
							}
							pkiCa.Root.Format = d.Val()

						default:
							return nil, d.Errf("unrecognized pki ca root option '%s'", d.Val())
						}
					}

				case "intermediate":
					if pkiCa.Intermediate == nil {
						pkiCa.Intermediate = new(caddypki.KeyPair)
					}
					for nesting := d.Nesting(); d.NextBlock(nesting); {
						switch d.Val() {
						case "cert":
							if !d.NextArg() {
								return nil, d.ArgErr()
							}
							pkiCa.Intermediate.Certificate = d.Val()

						case "key":
							if !d.NextArg() {
								return nil, d.ArgErr()
							}
							pkiCa.Intermediate.PrivateKey = d.Val()

						case "format":
							if !d.NextArg() {
								return nil, d.ArgErr()
							}
							pkiCa.Intermediate.Format = d.Val()

						default:
							return nil, d.Errf("unrecognized pki ca intermediate option '%s'", d.Val())
						}
					}

				default:
					return nil, d.Errf("unrecognized pki ca option '%s'", d.Val())
				}
			}

			pki.CAs[pkiCa.ID] = pkiCa

		default:
			return nil, d.Errf("unrecognized pki option '%s'", d.Val())
		}
	}
	return pki, nil
}

func (st ServerType) buildPKIApp(
	pairings []sbAddrAssociation,
	options map[string]any,
	warnings []caddyconfig.Warning,
) (*caddypki.PKI, []caddyconfig.Warning, error) {
	skipInstallTrust := false
	if _, ok := options["skip_install_trust"]; ok {
		skipInstallTrust = true
	}
	falseBool := false

	// Load the PKI app configured via global options
	var pkiApp *caddypki.PKI
	unwrappedPki, ok := options["pki"].(*caddypki.PKI)
	if ok {
		pkiApp = unwrappedPki
	} else {
		pkiApp = &caddypki.PKI{CAs: make(map[string]*caddypki.CA)}
	}
	for _, ca := range pkiApp.CAs {
		if skipInstallTrust {
			ca.InstallTrust = &falseBool
		}
		pkiApp.CAs[ca.ID] = ca
	}

	// Add in the CAs configured via directives
	for _, p := range pairings {
		for _, sblock := range p.serverBlocks {
			// find all the CAs that were defined and add them to the app config,
			// i.e. from any "acme_server" directives
			for _, caCfgValue := range sblock.pile["pki.ca"] {
				ca := caCfgValue.Value.(*caddypki.CA)
				if skipInstallTrust {
					ca.InstallTrust = &falseBool
				}

				// the CA might already exist from global options, so
				// don't overwrite it in that case
				if _, ok := pkiApp.CAs[ca.ID]; !ok {
					pkiApp.CAs[ca.ID] = ca
				}
			}
		}
	}

	// if there were no CAs defined in any of the servers,
	// and we were requested to not install trust, then
	// add one for the default/local CA to do so
	if len(pkiApp.CAs) == 0 && skipInstallTrust {
		ca := new(caddypki.CA)
		ca.ID = caddypki.DefaultCAID
		ca.InstallTrust = &falseBool
		pkiApp.CAs[ca.ID] = ca
	}

	return pkiApp, warnings, nil
}
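The merge in buildPKIApp keeps CAs that came from global options and only adds a directive-defined CA when its ID is new. A minimal standalone sketch of that rule, using a toy `ca` type (not caddypki's actual `CA` struct):

```go
package main

import "fmt"

// ca is a stand-in for caddypki.CA, reduced to what the merge needs.
type ca struct{ ID, Name string }

// mergeCAs mirrors the precedence rule: global-option CAs win; a CA
// from a directive is only added if its ID is not already present.
func mergeCAs(global map[string]*ca, fromDirectives []*ca) map[string]*ca {
	for _, c := range fromDirectives {
		if _, ok := global[c.ID]; !ok {
			global[c.ID] = c
		}
	}
	return global
}

func main() {
	global := map[string]*ca{"local": {ID: "local", Name: "from global options"}}
	merged := mergeCAs(global, []*ca{
		{ID: "local", Name: "from acme_server"}, // ignored: ID already exists
		{ID: "corp", Name: "from acme_server"},  // added: ID is new
	})
	fmt.Println(merged["local"].Name, "/", merged["corp"].Name)
	// from global options / from acme_server
}
```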
344
caddyconfig/httpcaddyfile/serveroptions.go
Normal file

@@ -0,0 +1,344 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package httpcaddyfile

import (
	"encoding/json"
	"fmt"
	"slices"

	"github.com/dustin/go-humanize"

	"github.com/caddyserver/caddy/v2"
	"github.com/caddyserver/caddy/v2/caddyconfig"
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
	"github.com/caddyserver/caddy/v2/modules/caddyhttp"
)

// serverOptions collects server config overrides parsed from Caddyfile global options
type serverOptions struct {
	// If set, will only apply these options to servers that contain a
	// listener address that matches exactly. If empty, will apply to all
	// servers that were not already matched by another serverOptions.
	ListenerAddress string

	// These will all map 1:1 to the caddyhttp.Server struct
	Name                 string
	ListenerWrappersRaw  []json.RawMessage
	ReadTimeout          caddy.Duration
	ReadHeaderTimeout    caddy.Duration
	WriteTimeout         caddy.Duration
	IdleTimeout          caddy.Duration
	KeepAliveInterval    caddy.Duration
	MaxHeaderBytes       int
	EnableFullDuplex     bool
	Protocols            []string
	StrictSNIHost        *bool
	TrustedProxiesRaw    json.RawMessage
	TrustedProxiesStrict int
	ClientIPHeaders      []string
	ShouldLogCredentials bool
	Metrics              *caddyhttp.Metrics
	Trace                bool // TODO: EXPERIMENTAL
}

func unmarshalCaddyfileServerOptions(d *caddyfile.Dispenser) (any, error) {
	d.Next() // consume option name

	serverOpts := serverOptions{}
	if d.NextArg() {
		serverOpts.ListenerAddress = d.Val()
		if d.NextArg() {
			return nil, d.ArgErr()
		}
	}
	for d.NextBlock(0) {
		switch d.Val() {
		case "name":
			if serverOpts.ListenerAddress == "" {
				return nil, d.Errf("cannot set a name for a server without a listener address")
			}
			if !d.NextArg() {
				return nil, d.ArgErr()
			}
			serverOpts.Name = d.Val()

		case "listener_wrappers":
			for nesting := d.Nesting(); d.NextBlock(nesting); {
				modID := "caddy.listeners." + d.Val()
				unm, err := caddyfile.UnmarshalModule(d, modID)
				if err != nil {
					return nil, err
				}
				listenerWrapper, ok := unm.(caddy.ListenerWrapper)
				if !ok {
					return nil, fmt.Errorf("module %s (%T) is not a listener wrapper", modID, unm)
				}
				jsonListenerWrapper := caddyconfig.JSONModuleObject(
					listenerWrapper,
					"wrapper",
					listenerWrapper.(caddy.Module).CaddyModule().ID.Name(),
					nil,
				)
				serverOpts.ListenerWrappersRaw = append(serverOpts.ListenerWrappersRaw, jsonListenerWrapper)
			}

		case "timeouts":
			for nesting := d.Nesting(); d.NextBlock(nesting); {
				switch d.Val() {
				case "read_body":
					if !d.NextArg() {
						return nil, d.ArgErr()
					}
					dur, err := caddy.ParseDuration(d.Val())
					if err != nil {
						return nil, d.Errf("parsing read_body timeout duration: %v", err)
					}
					serverOpts.ReadTimeout = caddy.Duration(dur)

				case "read_header":
					if !d.NextArg() {
						return nil, d.ArgErr()
					}
					dur, err := caddy.ParseDuration(d.Val())
					if err != nil {
						return nil, d.Errf("parsing read_header timeout duration: %v", err)
					}
					serverOpts.ReadHeaderTimeout = caddy.Duration(dur)

				case "write":
					if !d.NextArg() {
						return nil, d.ArgErr()
					}
					dur, err := caddy.ParseDuration(d.Val())
					if err != nil {
						return nil, d.Errf("parsing write timeout duration: %v", err)
					}
					serverOpts.WriteTimeout = caddy.Duration(dur)

				case "idle":
					if !d.NextArg() {
						return nil, d.ArgErr()
					}
					dur, err := caddy.ParseDuration(d.Val())
					if err != nil {
						return nil, d.Errf("parsing idle timeout duration: %v", err)
					}
					serverOpts.IdleTimeout = caddy.Duration(dur)

				default:
					return nil, d.Errf("unrecognized timeouts option '%s'", d.Val())
				}
			}
		case "keepalive_interval":
			if !d.NextArg() {
				return nil, d.ArgErr()
			}
			dur, err := caddy.ParseDuration(d.Val())
			if err != nil {
				return nil, d.Errf("parsing keepalive interval duration: %v", err)
			}
			serverOpts.KeepAliveInterval = caddy.Duration(dur)

		case "max_header_size":
			var sizeStr string
			if !d.AllArgs(&sizeStr) {
				return nil, d.ArgErr()
			}
			size, err := humanize.ParseBytes(sizeStr)
			if err != nil {
				return nil, d.Errf("parsing max_header_size: %v", err)
			}
			serverOpts.MaxHeaderBytes = int(size)

		case "enable_full_duplex":
			if d.NextArg() {
				return nil, d.ArgErr()
			}
			serverOpts.EnableFullDuplex = true

		case "log_credentials":
			if d.NextArg() {
				return nil, d.ArgErr()
			}
			serverOpts.ShouldLogCredentials = true

		case "protocols":
			protos := d.RemainingArgs()
			for _, proto := range protos {
				if proto != "h1" && proto != "h2" && proto != "h2c" && proto != "h3" {
					return nil, d.Errf("unknown protocol '%s': expected h1, h2, h2c, or h3", proto)
				}
				if slices.Contains(serverOpts.Protocols, proto) {
					return nil, d.Errf("protocol %s specified more than once", proto)
				}
				serverOpts.Protocols = append(serverOpts.Protocols, proto)
			}
			if nesting := d.Nesting(); d.NextBlock(nesting) {
				return nil, d.ArgErr()
			}

		case "strict_sni_host":
			if d.NextArg() && d.Val() != "insecure_off" && d.Val() != "on" {
				return nil, d.Errf("strict_sni_host only supports 'on' or 'insecure_off', got '%s'", d.Val())
			}
			boolVal := true
			if d.Val() == "insecure_off" {
				boolVal = false
			}
			serverOpts.StrictSNIHost = &boolVal

		case "trusted_proxies":
			if !d.NextArg() {
				return nil, d.Err("trusted_proxies expects an IP range source module name as its first argument")
			}
			modID := "http.ip_sources." + d.Val()
			unm, err := caddyfile.UnmarshalModule(d, modID)
			if err != nil {
				return nil, err
			}
			source, ok := unm.(caddyhttp.IPRangeSource)
			if !ok {
				return nil, fmt.Errorf("module %s (%T) is not an IP range source", modID, unm)
			}
			jsonSource := caddyconfig.JSONModuleObject(
				source,
				"source",
				source.(caddy.Module).CaddyModule().ID.Name(),
				nil,
			)
			serverOpts.TrustedProxiesRaw = jsonSource

		case "trusted_proxies_strict":
			if d.NextArg() {
				return nil, d.ArgErr()
			}
			serverOpts.TrustedProxiesStrict = 1

		case "client_ip_headers":
			headers := d.RemainingArgs()
			for _, header := range headers {
				if slices.Contains(serverOpts.ClientIPHeaders, header) {
					return nil, d.Errf("client IP header %s specified more than once", header)
				}
				serverOpts.ClientIPHeaders = append(serverOpts.ClientIPHeaders, header)
			}
			if nesting := d.Nesting(); d.NextBlock(nesting) {
				return nil, d.ArgErr()
			}

		case "metrics":
			caddy.Log().Warn("The nested 'metrics' option inside `servers` is deprecated and will be removed in the next major version. Use the global 'metrics' option instead.")
			serverOpts.Metrics = new(caddyhttp.Metrics)
			for nesting := d.Nesting(); d.NextBlock(nesting); {
				switch d.Val() {
				case "per_host":
					serverOpts.Metrics.PerHost = true
				}
			}

		case "trace":
			if d.NextArg() {
				return nil, d.ArgErr()
			}
			serverOpts.Trace = true

		default:
			return nil, d.Errf("unrecognized servers option '%s'", d.Val())
		}
	}
	return serverOpts, nil
}

// applyServerOptions sets the server options on the appropriate servers
func applyServerOptions(
	servers map[string]*caddyhttp.Server,
	options map[string]any,
	_ *[]caddyconfig.Warning,
) error {
	serverOpts, ok := options["servers"].([]serverOptions)
	if !ok {
		return nil
	}

	// check for duplicate names, which would clobber the config
	existingNames := map[string]bool{}
	for _, opts := range serverOpts {
		if opts.Name == "" {
			continue
		}
		if existingNames[opts.Name] {
			return fmt.Errorf("cannot use duplicate server name '%s'", opts.Name)
		}
		existingNames[opts.Name] = true
	}

	// collect the server name overrides
	nameReplacements := map[string]string{}

	for key, server := range servers {
		// find the options that apply to this server
		optsIndex := slices.IndexFunc(serverOpts, func(s serverOptions) bool {
			return s.ListenerAddress == "" || slices.Contains(server.Listen, s.ListenerAddress)
		})

		// if none apply, then move to the next server
		if optsIndex == -1 {
			continue
		}
		opts := serverOpts[optsIndex]

		// set all the options
		server.ListenerWrappersRaw = opts.ListenerWrappersRaw
		server.ReadTimeout = opts.ReadTimeout
		server.ReadHeaderTimeout = opts.ReadHeaderTimeout
		server.WriteTimeout = opts.WriteTimeout
		server.IdleTimeout = opts.IdleTimeout
		server.KeepAliveInterval = opts.KeepAliveInterval
		server.MaxHeaderBytes = opts.MaxHeaderBytes
		server.EnableFullDuplex = opts.EnableFullDuplex
		server.Protocols = opts.Protocols
		server.StrictSNIHost = opts.StrictSNIHost
		server.TrustedProxiesRaw = opts.TrustedProxiesRaw
		server.ClientIPHeaders = opts.ClientIPHeaders
		server.TrustedProxiesStrict = opts.TrustedProxiesStrict
		server.Metrics = opts.Metrics
		if opts.ShouldLogCredentials {
			if server.Logs == nil {
				server.Logs = new(caddyhttp.ServerLogConfig)
			}
			server.Logs.ShouldLogCredentials = opts.ShouldLogCredentials
		}
		if opts.Trace {
			// TODO: THIS IS EXPERIMENTAL (MAY 2024)
			if server.Logs == nil {
				server.Logs = new(caddyhttp.ServerLogConfig)
			}
			server.Logs.Trace = opts.Trace
		}

		if opts.Name != "" {
			nameReplacements[key] = opts.Name
		}
	}

	// rename the servers if marked to do so
	for old, new := range nameReplacements {
		servers[new] = servers[old]
		delete(servers, old)
	}

	return nil
}
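applyServerOptions picks, for each server, the first options entry whose listener address matches; an entry with an empty address acts as a catch-all. A standalone sketch of that matching with a toy `serverOpts` type standing in for the real struct:

```go
package main

import (
	"fmt"
	"slices"
)

// serverOpts is a stand-in for the real serverOptions struct: an entry
// applies when its ListenerAddress is empty (catch-all) or appears in
// the server's listen addresses.
type serverOpts struct {
	ListenerAddress string
	Name            string
}

// matchOpts returns the index of the first applicable options entry,
// mirroring the slices.IndexFunc call in applyServerOptions.
func matchOpts(listen []string, opts []serverOpts) int {
	return slices.IndexFunc(opts, func(s serverOpts) bool {
		return s.ListenerAddress == "" || slices.Contains(listen, s.ListenerAddress)
	})
}

func main() {
	opts := []serverOpts{
		{ListenerAddress: ":8443", Name: "alt"},
		{ListenerAddress: "", Name: "default"}, // catch-all listed last
	}
	fmt.Println(matchOpts([]string{":8443"}, opts)) // exact match wins: 0
	fmt.Println(matchOpts([]string{":443"}, opts))  // falls through to catch-all: 1
}
```

Because the first match wins, placing a catch-all entry before an address-specific one would shadow it; ordering the entries matters.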
102
caddyconfig/httpcaddyfile/shorthands.go
Normal file

@@ -0,0 +1,102 @@
package httpcaddyfile

import (
	"regexp"
	"strings"

	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
)

type ComplexShorthandReplacer struct {
	search  *regexp.Regexp
	replace string
}

type ShorthandReplacer struct {
	complex []ComplexShorthandReplacer
	simple  *strings.Replacer
}

func NewShorthandReplacer() ShorthandReplacer {
	// replace shorthand placeholders (which are convenient
	// when writing a Caddyfile) with their actual placeholder
	// identifiers or variable names
	replacer := strings.NewReplacer(placeholderShorthands()...)

	// these are placeholders that allow a user-defined final
	// parameter, but we still want to provide a shorthand
	// for those, so we use a regexp to replace
	regexpReplacements := []ComplexShorthandReplacer{
		{regexp.MustCompile(`{header\.([\w-]*)}`), "{http.request.header.$1}"},
		{regexp.MustCompile(`{cookie\.([\w-]*)}`), "{http.request.cookie.$1}"},
		{regexp.MustCompile(`{labels\.([\w-]*)}`), "{http.request.host.labels.$1}"},
		{regexp.MustCompile(`{path\.([\w-]*)}`), "{http.request.uri.path.$1}"},
		{regexp.MustCompile(`{file\.([\w-]*)}`), "{http.request.uri.path.file.$1}"},
		{regexp.MustCompile(`{query\.([\w-]*)}`), "{http.request.uri.query.$1}"},
		{regexp.MustCompile(`{re\.([\w-\.]*)}`), "{http.regexp.$1}"},
		{regexp.MustCompile(`{vars\.([\w-]*)}`), "{http.vars.$1}"},
		{regexp.MustCompile(`{rp\.([\w-\.]*)}`), "{http.reverse_proxy.$1}"},
		{regexp.MustCompile(`{resp\.([\w-\.]*)}`), "{http.intercept.$1}"},
		{regexp.MustCompile(`{err\.([\w-\.]*)}`), "{http.error.$1}"},
		{regexp.MustCompile(`{file_match\.([\w-]*)}`), "{http.matchers.file.$1}"},
	}

	return ShorthandReplacer{
		complex: regexpReplacements,
		simple:  replacer,
	}
}

// placeholderShorthands returns a slice of old-new string pairs,
// where the left of the pair is a placeholder shorthand that may
// be used in the Caddyfile, and the right is the replacement.
func placeholderShorthands() []string {
	return []string{
		"{host}", "{http.request.host}",
		"{hostport}", "{http.request.hostport}",
		"{port}", "{http.request.port}",
		"{orig_method}", "{http.request.orig_method}",
		"{orig_uri}", "{http.request.orig_uri}",
		"{orig_path}", "{http.request.orig_uri.path}",
		"{orig_dir}", "{http.request.orig_uri.path.dir}",
		"{orig_file}", "{http.request.orig_uri.path.file}",
		"{orig_query}", "{http.request.orig_uri.query}",
		"{orig_?query}", "{http.request.orig_uri.prefixed_query}",
		"{method}", "{http.request.method}",
		"{uri}", "{http.request.uri}",
		"{path}", "{http.request.uri.path}",
		"{dir}", "{http.request.uri.path.dir}",
		"{file}", "{http.request.uri.path.file}",
		"{query}", "{http.request.uri.query}",
		"{?query}", "{http.request.uri.prefixed_query}",
		"{remote}", "{http.request.remote}",
		"{remote_host}", "{http.request.remote.host}",
		"{remote_port}", "{http.request.remote.port}",
		"{scheme}", "{http.request.scheme}",
		"{uuid}", "{http.request.uuid}",
		"{tls_cipher}", "{http.request.tls.cipher_suite}",
		"{tls_version}", "{http.request.tls.version}",
		"{tls_client_fingerprint}", "{http.request.tls.client.fingerprint}",
		"{tls_client_issuer}", "{http.request.tls.client.issuer}",
		"{tls_client_serial}", "{http.request.tls.client.serial}",
		"{tls_client_subject}", "{http.request.tls.client.subject}",
		"{tls_client_certificate_pem}", "{http.request.tls.client.certificate_pem}",
		"{tls_client_certificate_der_base64}", "{http.request.tls.client.certificate_der_base64}",
		"{upstream_hostport}", "{http.reverse_proxy.upstream.hostport}",
		"{client_ip}", "{http.vars.client_ip}",
	}
}

// ApplyToSegment replaces each shorthand placeholder with its full
// placeholder, understandable by Caddy.
func (s ShorthandReplacer) ApplyToSegment(segment *caddyfile.Segment) {
	if segment != nil {
		for i := 0; i < len(*segment); i++ {
			// simple string replacements
			(*segment)[i].Text = s.simple.Replace((*segment)[i].Text)
			// complex regexp replacements
			for _, r := range s.complex {
				(*segment)[i].Text = r.search.ReplaceAllString((*segment)[i].Text, r.replace)
			}
		}
	}
}
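NewShorthandReplacer combines a strings.Replacer for fixed shorthands with regexps for shorthands that carry a user-chosen suffix (like a header name). A minimal standalone sketch of that two-stage expansion, with only two shorthands shown (`expand` is illustrative, not Caddy's API):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// expand runs the same two stages as ShorthandReplacer: fixed pairs
// through strings.Replacer, then suffix-carrying shorthands through a
// regexp with a capture group.
func expand(text string) string {
	simple := strings.NewReplacer(
		"{host}", "{http.request.host}",
		"{path}", "{http.request.uri.path}",
	)
	text = simple.Replace(text)
	headerRe := regexp.MustCompile(`\{header\.([\w-]*)\}`)
	return headerRe.ReplaceAllString(text, "{http.request.header.$1}")
}

func main() {
	fmt.Println(expand("redir {path} via {header.X-Forwarded-For}"))
	// redir {http.request.uri.path} via {http.request.header.X-Forwarded-For}
}
```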
9
caddyconfig/httpcaddyfile/testdata/import_variadic.txt
vendored
Normal file

@@ -0,0 +1,9 @@
(t2) {
	respond 200 {
		body {args[:]}
	}
}

:8082 {
	import t2 false
}
9
caddyconfig/httpcaddyfile/testdata/import_variadic_snippet.txt
vendored
Normal file

@@ -0,0 +1,9 @@
(t1) {
	respond 200 {
		body {args[:]}
	}
}

:8081 {
	import t1 false
}
15
caddyconfig/httpcaddyfile/testdata/import_variadic_with_import.txt
vendored
Normal file

@@ -0,0 +1,15 @@
(t1) {
	respond 200 {
		body {args[:]}
	}
}

:8081 {
	import t1 false
}

import import_variadic.txt

:8083 {
	import t2 true
}
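These test fixtures rely on `{args[:]}` expanding to the full, space-joined list of import arguments. A toy sketch of that substitution (not Caddy's actual parser, which also supports ranges like `{args[0:2]}`):

```go
package main

import (
	"fmt"
	"strings"
)

// expandArgs replaces the {args[:]} placeholder in one snippet line
// with all arguments passed to the import, joined by spaces.
func expandArgs(line string, args []string) string {
	return strings.ReplaceAll(line, "{args[:]}", strings.Join(args, " "))
}

func main() {
	fmt.Println(expandArgs("body {args[:]}", []string{"false"}))
	// body false
}
```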
@@ -19,53 +19,57 @@ import (
"encoding/json"
"fmt"
"reflect"
"slices"
"sort"
"strconv"
"strings"

"github.com/caddyserver/certmagic"
"github.com/mholt/acmez/v2/acme"

"github.com/caddyserver/caddy/v2"
"github.com/caddyserver/caddy/v2/caddyconfig"
"github.com/caddyserver/caddy/v2/modules/caddyhttp"
"github.com/caddyserver/caddy/v2/modules/caddytls"
"github.com/caddyserver/certmagic"
)

func (st ServerType) buildTLSApp(
pairings []sbAddrAssociation,
options map[string]interface{},
options map[string]any,
warnings []caddyconfig.Warning,
) (*caddytls.TLS, []caddyconfig.Warning, error) {

tlsApp := &caddytls.TLS{CertificatesRaw: make(caddy.ModuleMap)}
var certLoaders []caddytls.CertificateLoader

httpsPort := strconv.Itoa(caddyhttp.DefaultHTTPSPort)
if hsp, ok := options["https_port"].(int); ok {
httpsPort = strconv.Itoa(hsp)
httpPort := strconv.Itoa(caddyhttp.DefaultHTTPPort)
if hp, ok := options["http_port"].(int); ok {
httpPort = strconv.Itoa(hp)
}
autoHTTPS := []string{}
if ah, ok := options["auto_https"].([]string); ok {
autoHTTPS = ah
}

// count how many server blocks have a TLS-enabled key with
// no host, and find all hosts that share a server block with
// a hostless key, so that they don't get forgotten/omitted
// by auto-HTTPS (since they won't appear in route matchers)
var serverBlocksWithTLSHostlessKey int
hostsSharedWithHostlessKey := make(map[string]struct{})
for _, pair := range pairings {
for _, sb := range pair.serverBlocks {
for _, addr := range sb.keys {
if addr.Host == "" {
// this address has no hostname, but if it's explicitly set
// to HTTPS, then we need to count it as being TLS-enabled
if addr.Scheme == "https" || addr.Port == httpsPort {
serverBlocksWithTLSHostlessKey++
// find all hosts that share a server block with a hostless
// key, so that they don't get forgotten/omitted by auto-HTTPS
// (since they won't appear in route matchers)
httpsHostsSharedWithHostlessKey := make(map[string]struct{})
if !slices.Contains(autoHTTPS, "off") {
for _, pair := range pairings {
for _, sb := range pair.serverBlocks {
for _, addr := range sb.parsedKeys {
if addr.Host != "" {
continue
}

// this server block has a hostless key, now
// go through and add all the hosts to the set
for _, otherAddr := range sb.keys {
for _, otherAddr := range sb.parsedKeys {
if otherAddr.Original == addr.Original {
continue
}
if otherAddr.Host != "" {
hostsSharedWithHostlessKey[otherAddr.Host] = struct{}{}
if otherAddr.Host != "" && otherAddr.Scheme != "http" && otherAddr.Port != httpPort {
httpsHostsSharedWithHostlessKey[otherAddr.Host] = struct{}{}
}
}
break
@@ -74,166 +78,234 @@ func (st ServerType) buildTLSApp(
}
}

// a catch-all automation policy is used as a "default" for all subjects that
// don't have custom configuration explicitly associated with them; this
// is only to add if the global settings or defaults are non-empty
catchAllAP, err := newBaseAutomationPolicy(options, warnings, false)
if err != nil {
return nil, warnings, err
}
if catchAllAP != nil {
if tlsApp.Automation == nil {
tlsApp.Automation = new(caddytls.AutomationConfig)
}
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, catchAllAP)
}

// collect all hosts that have a wildcard in them, and aren't HTTP
wildcardHosts := []string{}
for _, p := range pairings {
var addresses []string
for _, addressWithProtocols := range p.addressesWithProtocols {
addresses = append(addresses, addressWithProtocols.address)
}
if !listenersUseAnyPortOtherThan(addresses, httpPort) {
continue
}
for _, sblock := range p.serverBlocks {
for _, addr := range sblock.parsedKeys {
if strings.HasPrefix(addr.Host, "*.") {
wildcardHosts = append(wildcardHosts, addr.Host[2:])
}
}
}
}

for _, p := range pairings {
// avoid setting up TLS automation policies for a server that is HTTP-only
var addresses []string
for _, addressWithProtocols := range p.addressesWithProtocols {
addresses = append(addresses, addressWithProtocols.address)
}
if !listenersUseAnyPortOtherThan(addresses, httpPort) {
continue
}

for _, sblock := range p.serverBlocks {
// check the scheme of all the site addresses,
// skip building AP if they all had http://
if sblock.isAllHTTP() {
continue
}

// get values that populate an automation policy for this block
var ap *caddytls.AutomationPolicy
ap, err := newBaseAutomationPolicy(options, warnings, true)
if err != nil {
return nil, warnings, err
}

// make a plain copy so we can compare whether we made any changes
apCopy, err := newBaseAutomationPolicy(options, warnings, true)
if err != nil {
return nil, warnings, err
}

sblockHosts := sblock.hostsFromKeys(false)
if len(sblockHosts) == 0 {
if len(sblockHosts) == 0 && catchAllAP != nil {
ap = catchAllAP
}

// on-demand tls
if _, ok := sblock.pile["tls.on_demand"]; ok {
if ap == nil {
var err error
ap, err = newBaseAutomationPolicy(options, warnings, true)
if err != nil {
return nil, warnings, err
}
}
ap.OnDemand = true
}

// reuse private keys tls
if _, ok := sblock.pile["tls.reuse_private_keys"]; ok {
ap.ReusePrivateKeys = true
}

if keyTypeVals, ok := sblock.pile["tls.key_type"]; ok {
ap.KeyType = keyTypeVals[0].Value.(string)
}

// certificate issuers
if issuerVals, ok := sblock.pile["tls.cert_issuer"]; ok {
var issuers []certmagic.Issuer
for _, issuerVal := range issuerVals {
issuer := issuerVal.Value.(certmagic.Issuer)
if ap == nil {
var err error
ap, err = newBaseAutomationPolicy(options, warnings, true)
if err != nil {
return nil, warnings, err
}
}
if ap == catchAllAP && !reflect.DeepEqual(ap.Issuer, issuer) {
return nil, warnings, fmt.Errorf("automation policy from site block is also default/catch-all policy because of key without hostname, and the two are in conflict: %#v != %#v", ap.Issuer, issuer)
}
ap.Issuer = issuer
issuers = append(issuers, issuerVal.Value.(certmagic.Issuer))
}
if ap == catchAllAP && !reflect.DeepEqual(ap.Issuers, issuers) {
// this more correctly implements an error check that was removed
// below; try it with this config:
//
// :443 {
//     bind 127.0.0.1
// }
//
// :443 {
//     bind ::1
//     tls {
//         issuer acme
//     }
// }
return nil, warnings, fmt.Errorf("automation policy from site block is also default/catch-all policy because of key without hostname, and the two are in conflict: %#v != %#v", ap.Issuers, issuers)
}
ap.Issuers = issuers
}

// certificate managers
if certManagerVals, ok := sblock.pile["tls.cert_manager"]; ok {
for _, certManager := range certManagerVals {
certGetterName := certManager.Value.(caddy.Module).CaddyModule().ID.Name()
ap.ManagersRaw = append(ap.ManagersRaw, caddyconfig.JSONModuleObject(certManager.Value, "via", certGetterName, &warnings))
}
}
// custom bind host
for _, cfgVal := range sblock.pile["bind"] {
// either an existing issuer is already configured (and thus, ap is not
// nil), or we need to configure an issuer, so we need ap to be non-nil
if ap == nil {
ap, err = newBaseAutomationPolicy(options, warnings, true)
if err != nil {
return nil, warnings, err
for _, iss := range ap.Issuers {
// if an issuer was already configured and it is NOT an ACME issuer,
// skip, since we intend to adjust only ACME issuers; ensure we
// include any issuer that embeds/wraps an underlying ACME issuer
var acmeIssuer *caddytls.ACMEIssuer
if acmeWrapper, ok := iss.(acmeCapable); ok {
acmeIssuer = acmeWrapper.GetACMEIssuer()
}
if acmeIssuer == nil {
continue
}
}

// if an issuer was already configured and it is NOT an ACME
// issuer, skip, since we intend to adjust only ACME issuers
var acmeIssuer *caddytls.ACMEIssuer
if ap.Issuer != nil {
var ok bool
if acmeIssuer, ok = ap.Issuer.(*caddytls.ACMEIssuer); !ok {
break
// proceed to configure the ACME issuer's bind host, without
// overwriting any existing settings
if acmeIssuer.Challenges == nil {
acmeIssuer.Challenges = new(caddytls.ChallengesConfig)
}
if acmeIssuer.Challenges.BindHost == "" {
// only binding to one host is supported
var bindHost string
if asserted, ok := cfgVal.Value.(addressesWithProtocols); ok && len(asserted.addresses) > 0 {
bindHost = asserted.addresses[0]
}
acmeIssuer.Challenges.BindHost = bindHost
}
}

// proceed to configure the ACME issuer's bind host, without
// overwriting any existing settings
if acmeIssuer == nil {
acmeIssuer = new(caddytls.ACMEIssuer)
}
if acmeIssuer.Challenges == nil {
acmeIssuer.Challenges = new(caddytls.ChallengesConfig)
}
if acmeIssuer.Challenges.BindHost == "" {
// only binding to one host is supported
var bindHost string
if bindHosts, ok := cfgVal.Value.([]string); ok && len(bindHosts) > 0 {
bindHost = bindHosts[0]
}
acmeIssuer.Challenges.BindHost = bindHost
}
ap.Issuer = acmeIssuer // we'll encode it later
}

if ap != nil {
if ap.Issuer != nil {
// encode issuer now that it's all set up
issuerName := ap.Issuer.(caddy.Module).CaddyModule().ID.Name()
ap.IssuerRaw = caddyconfig.JSONModuleObject(ap.Issuer, "module", issuerName, &warnings)
}
// we used to ensure this block is allowed to create an automation policy;
// doing so was forbidden if it has a key with no host (i.e. ":443")
// and if there is a different server block that also has a key with no
// host -- since a key with no host matches any host, we need its
// associated automation policy to have an empty Subjects list, i.e. no
// host filter, which is indistinguishable between the two server blocks
// because automation is not done in the context of a particular server...
// this is an example of a poor mapping from Caddyfile to JSON but that's
// the least-leaky abstraction I could figure out -- however, this check
// was preventing certain listeners, like those provided by plugins, from
// being used as desired (see the Tailscale listener plugin), so I removed
// the check: and I think since I originally wrote the check I added a new
// check above which *properly* detects this ambiguity without breaking the
// listener plugin; see the check above with a commented example config
if len(sblockHosts) == 0 && catchAllAP == nil {
// this server block has a key with no hosts, but there is not yet
// a catch-all automation policy (probably because no global options
// were set), so this one becomes it
catchAllAP = ap
}

// first make sure this block is allowed to create an automation policy;
// doing so is forbidden if it has a key with no host (i.e. ":443")
// and if there is a different server block that also has a key with no
// host -- since a key with no host matches any host, we need its
// associated automation policy to have an empty Subjects list, i.e. no
// host filter, which is indistinguishable between the two server blocks
// because automation is not done in the context of a particular server...
// this is an example of a poor mapping from Caddyfile to JSON but that's
// the least-leaky abstraction I could figure out
if len(sblockHosts) == 0 {
if serverBlocksWithTLSHostlessKey > 1 {
// this server block and at least one other has a key with no host,
// making the two indistinguishable; it is misleading to define such
// a policy within one server block since it actually will apply to
// others as well
return nil, warnings, fmt.Errorf("cannot make a TLS automation policy from a server block that has a host-less address when there are other TLS-enabled server block addresses lacking a host")
hostsNotHTTP := sblock.hostsFromKeysNotHTTP(httpPort)
sort.Strings(hostsNotHTTP) // solely for deterministic test results

// if we prefer wildcards and the AP is unchanged,
// then we can skip this AP because it should be covered
// by an AP with a wildcard
if slices.Contains(autoHTTPS, "prefer_wildcard") {
if hostsCoveredByWildcard(hostsNotHTTP, wildcardHosts) &&
reflect.DeepEqual(ap, apCopy) {
continue
}
}

// associate our new automation policy with this server block's hosts
ap.SubjectsRaw = hostsNotHTTP

// if a combination of public and internal names were given
// for this same server block and no issuer was specified, we
// need to separate them out in the automation policies so
// that the internal names can use the internal issuer and
// the other names can use the default/public/ACME issuer
var ap2 *caddytls.AutomationPolicy
if len(ap.Issuers) == 0 {
var internal, external []string
for _, s := range ap.SubjectsRaw {
// do not create Issuers for Tailscale domains; they will be given a Manager instead
if isTailscaleDomain(s) {
continue
}
if catchAllAP == nil {
// this server block has a key with no hosts, but there is not yet
// a catch-all automation policy (probably because no global options
// were set), so this one becomes it
catchAllAP = ap
if !certmagic.SubjectQualifiesForCert(s) {
return nil, warnings, fmt.Errorf("subject does not qualify for certificate: '%s'", s)
}
// we don't use certmagic.SubjectQualifiesForPublicCert() because of one nuance:
// names like *.*.tld that may not qualify for a public certificate are actually
// fine when used with OnDemand, since OnDemand (currently) does not obtain
// wildcards (if it ever does, there will be a separate config option to enable
|
||||
// it that we would need to check here) since the hostname is known at handshake;
|
||||
// and it is unexpected to switch to internal issuer when the user wants to get
|
||||
// regular certificates on-demand for a class of certs like *.*.tld.
|
||||
if subjectQualifiesForPublicCert(ap, s) {
|
||||
external = append(external, s)
|
||||
} else {
|
||||
internal = append(internal, s)
|
||||
}
|
||||
}
|
||||
|
||||
// associate our new automation policy with this server block's hosts,
|
||||
// unless, of course, the server block has a key with no hosts, in which
|
||||
// case its automation policy becomes or blends with the default/global
|
||||
// automation policy because, of necessity, it applies to all hostnames
|
||||
// (i.e. it has no Subjects filter) -- in that case, we'll append it last
|
||||
if ap != catchAllAP {
|
||||
ap.Subjects = sblockHosts
|
||||
|
||||
// if a combination of public and internal names were given
|
||||
// for this same server block and no issuer was specified, we
|
||||
// need to separate them out in the automation policies so
|
||||
// that the internal names can use the internal issuer and
|
||||
// the other names can use the default/public/ACME issuer
|
||||
var ap2 *caddytls.AutomationPolicy
|
||||
if ap.Issuer == nil {
|
||||
var internal, external []string
|
||||
for _, s := range ap.Subjects {
|
||||
if certmagic.SubjectQualifiesForPublicCert(s) {
|
||||
external = append(external, s)
|
||||
} else {
|
||||
internal = append(internal, s)
|
||||
}
|
||||
}
|
||||
if len(external) > 0 && len(internal) > 0 {
|
||||
ap.Subjects = external
|
||||
apCopy := *ap
|
||||
ap2 = &apCopy
|
||||
ap2.Subjects = internal
|
||||
ap2.IssuerRaw = caddyconfig.JSONModuleObject(caddytls.InternalIssuer{}, "module", "internal", &warnings)
|
||||
}
|
||||
}
|
||||
if tlsApp.Automation == nil {
|
||||
tlsApp.Automation = new(caddytls.AutomationConfig)
|
||||
}
|
||||
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap)
|
||||
if ap2 != nil {
|
||||
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap2)
|
||||
}
|
||||
if len(external) > 0 && len(internal) > 0 {
|
||||
ap.SubjectsRaw = external
|
||||
apCopy := *ap
|
||||
ap2 = &apCopy
|
||||
ap2.SubjectsRaw = internal
|
||||
ap2.IssuersRaw = []json.RawMessage{caddyconfig.JSONModuleObject(caddytls.InternalIssuer{}, "module", "internal", &warnings)}
|
||||
}
|
||||
}
|
||||
|
||||
if tlsApp.Automation == nil {
|
||||
tlsApp.Automation = new(caddytls.AutomationConfig)
|
||||
}
|
||||
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap)
|
||||
if ap2 != nil {
|
||||
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, ap2)
|
||||
}
|
||||
|
||||
// certificate loaders
|
||||
if clVals, ok := sblock.pile["tls.certificate_loader"]; ok {
|
||||
if clVals, ok := sblock.pile["tls.cert_loader"]; ok {
|
||||
for _, clVal := range clVals {
|
||||
certLoaders = append(certLoaders, clVal.Value.(caddytls.CertificateLoader))
|
||||
}
|
||||
|
@@ -277,6 +349,45 @@ func (st ServerType) buildTLSApp(
tlsApp.Automation.OnDemand = onDemand
}

// if the storage check is set to "off", disable the background storage check
if sc, ok := options["storage_check"].(string); ok && sc == "off" {
tlsApp.DisableStorageCheck = true
}

// if the storage clean interval is a boolean, then false disables cleaning
if sci, ok := options["storage_clean_interval"].(bool); ok && !sci {
tlsApp.DisableStorageClean = true
}

// set the storage clean interval if configured
if storageCleanInterval, ok := options["storage_clean_interval"].(caddy.Duration); ok {
if tlsApp.Automation == nil {
tlsApp.Automation = new(caddytls.AutomationConfig)
}
tlsApp.Automation.StorageCleanInterval = storageCleanInterval
}

// set the expired certificates renew interval if configured
if renewCheckInterval, ok := options["renew_interval"].(caddy.Duration); ok {
if tlsApp.Automation == nil {
tlsApp.Automation = new(caddytls.AutomationConfig)
}
tlsApp.Automation.RenewCheckInterval = renewCheckInterval
}

// set the OCSP check interval if configured
if ocspCheckInterval, ok := options["ocsp_interval"].(caddy.Duration); ok {
if tlsApp.Automation == nil {
tlsApp.Automation = new(caddytls.AutomationConfig)
}
tlsApp.Automation.OCSPCheckInterval = ocspCheckInterval
}

// set whether OCSP stapling should be disabled for manually-managed certificates
if ocspConfig, ok := options["ocsp_stapling"].(certmagic.OCSPConfig); ok {
tlsApp.DisableOCSPStapling = ocspConfig.DisableStapling
}

// if any hostnames appear on the same server block as a key with
// no host, they will not be used with route matchers because the
// hostless key matches all hosts, therefore, it wouldn't be
@@ -286,47 +397,83 @@ func (st ServerType) buildTLSApp(
// get internal certificates by default rather than ACME
var al caddytls.AutomateLoader
internalAP := &caddytls.AutomationPolicy{
IssuerRaw: json.RawMessage(`{"module":"internal"}`),
IssuersRaw: []json.RawMessage{json.RawMessage(`{"module":"internal"}`)},
}
for h := range hostsSharedWithHostlessKey {
al = append(al, h)
if !certmagic.SubjectQualifiesForPublicCert(h) {
internalAP.Subjects = append(internalAP.Subjects, h)
if !slices.Contains(autoHTTPS, "off") && !slices.Contains(autoHTTPS, "disable_certs") {
for h := range httpsHostsSharedWithHostlessKey {
al = append(al, h)
if !certmagic.SubjectQualifiesForPublicCert(h) {
internalAP.SubjectsRaw = append(internalAP.SubjectsRaw, h)
}
}
}
if len(al) > 0 {
tlsApp.CertificatesRaw["automate"] = caddyconfig.JSON(al, &warnings)
}
if len(internalAP.Subjects) > 0 {
if len(internalAP.SubjectsRaw) > 0 {
if tlsApp.Automation == nil {
tlsApp.Automation = new(caddytls.AutomationConfig)
}
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, internalAP)
}

// if there is a global/catch-all automation policy, ensure it goes last
if catchAllAP != nil {
// first, encode its issuer, if there is one
if catchAllAP.Issuer != nil {
issuerName := catchAllAP.Issuer.(caddy.Module).CaddyModule().ID.Name()
catchAllAP.IssuerRaw = caddyconfig.JSONModuleObject(catchAllAP.Issuer, "module", issuerName, &warnings)
}
// if there are any global options set for issuers (ACME ones in particular), make sure they
// take effect in every automation policy that does not have any issuers
if tlsApp.Automation != nil {
globalEmail := options["email"]
globalACMECA := options["acme_ca"]
globalACMECARoot := options["acme_ca_root"]
globalACMEDNS := options["acme_dns"]
globalACMEEAB := options["acme_eab"]
globalPreferredChains := options["preferred_chains"]
hasGlobalACMEDefaults := globalEmail != nil || globalACMECA != nil || globalACMECARoot != nil || globalACMEDNS != nil || globalACMEEAB != nil || globalPreferredChains != nil
if hasGlobalACMEDefaults {
for i := 0; i < len(tlsApp.Automation.Policies); i++ {
ap := tlsApp.Automation.Policies[i]
if len(ap.Issuers) == 0 && automationPolicyHasAllPublicNames(ap) {
// for public names, create default issuers which will later be filled in with configured global defaults
// (internal names will implicitly use the internal issuer at auto-https time)
emailStr, _ := globalEmail.(string)
ap.Issuers = caddytls.DefaultIssuers(emailStr)

// then append it to the end of the policies list
if tlsApp.Automation == nil {
tlsApp.Automation = new(caddytls.AutomationConfig)
// if a specific endpoint is configured, can't use multiple default issuers
if globalACMECA != nil {
ap.Issuers = []certmagic.Issuer{new(caddytls.ACMEIssuer)}
}
}
}
}
tlsApp.Automation.Policies = append(tlsApp.Automation.Policies, catchAllAP)
}

// do a little verification & cleanup
// finalize and verify policies; do cleanup
if tlsApp.Automation != nil {
for i, ap := range tlsApp.Automation.Policies {
// ensure all issuers have global defaults filled in
for j, issuer := range ap.Issuers {
err := fillInGlobalACMEDefaults(issuer, options)
if err != nil {
return nil, warnings, fmt.Errorf("filling in global issuer defaults for AP %d, issuer %d: %v", i, j, err)
}
}

// encode all issuer values we created, so they will be rendered in the output
if len(ap.Issuers) > 0 && ap.IssuersRaw == nil {
for _, iss := range ap.Issuers {
issuerName := iss.(caddy.Module).CaddyModule().ID.Name()
ap.IssuersRaw = append(ap.IssuersRaw, caddyconfig.JSONModuleObject(iss, "module", issuerName, &warnings))
}
}
}

// consolidate automation policies that are the exact same
tlsApp.Automation.Policies = consolidateAutomationPolicies(tlsApp.Automation.Policies)

// ensure automation policies don't overlap subjects (this should be
// an error at provision-time as well, but catch it in the adapt phase
// for convenience)
automationHostSet := make(map[string]struct{})
for _, ap := range tlsApp.Automation.Policies {
for _, s := range ap.Subjects {
for _, s := range ap.SubjectsRaw {
if _, ok := automationHostSet[s]; ok {
return nil, warnings, fmt.Errorf("hostname appears in more than one automation policy, making certificate management ambiguous: %s", s)
}
@@ -334,27 +481,101 @@ func (st ServerType) buildTLSApp(
}
}

// consolidate automation policies that are the exact same
tlsApp.Automation.Policies = consolidateAutomationPolicies(tlsApp.Automation.Policies)
// if nothing remains, remove any excess values to clean up the resulting config
if len(tlsApp.Automation.Policies) == 0 {
tlsApp.Automation.Policies = nil
}
if reflect.DeepEqual(tlsApp.Automation, new(caddytls.AutomationConfig)) {
tlsApp.Automation = nil
}
}

return tlsApp, warnings, nil
}

type acmeCapable interface{ GetACMEIssuer() *caddytls.ACMEIssuer }

func fillInGlobalACMEDefaults(issuer certmagic.Issuer, options map[string]any) error {
acmeWrapper, ok := issuer.(acmeCapable)
if !ok {
return nil
}
acmeIssuer := acmeWrapper.GetACMEIssuer()
if acmeIssuer == nil {
return nil
}

globalEmail := options["email"]
globalACMECA := options["acme_ca"]
globalACMECARoot := options["acme_ca_root"]
globalACMEDNS := options["acme_dns"]
globalACMEEAB := options["acme_eab"]
globalPreferredChains := options["preferred_chains"]
globalCertLifetime := options["cert_lifetime"]
globalHTTPPort, globalHTTPSPort := options["http_port"], options["https_port"]

if globalEmail != nil && acmeIssuer.Email == "" {
acmeIssuer.Email = globalEmail.(string)
}
if globalACMECA != nil && acmeIssuer.CA == "" {
acmeIssuer.CA = globalACMECA.(string)
}
if globalACMECARoot != nil && !slices.Contains(acmeIssuer.TrustedRootsPEMFiles, globalACMECARoot.(string)) {
acmeIssuer.TrustedRootsPEMFiles = append(acmeIssuer.TrustedRootsPEMFiles, globalACMECARoot.(string))
}
if globalACMEDNS != nil && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.DNS == nil) {
acmeIssuer.Challenges = &caddytls.ChallengesConfig{
DNS: &caddytls.DNSChallengeConfig{
ProviderRaw: caddyconfig.JSONModuleObject(globalACMEDNS, "name", globalACMEDNS.(caddy.Module).CaddyModule().ID.Name(), nil),
},
}
}
if globalACMEEAB != nil && acmeIssuer.ExternalAccount == nil {
acmeIssuer.ExternalAccount = globalACMEEAB.(*acme.EAB)
}
if globalPreferredChains != nil && acmeIssuer.PreferredChains == nil {
acmeIssuer.PreferredChains = globalPreferredChains.(*caddytls.ChainPreference)
}
if globalHTTPPort != nil && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.HTTP == nil || acmeIssuer.Challenges.HTTP.AlternatePort == 0) {
if acmeIssuer.Challenges == nil {
acmeIssuer.Challenges = new(caddytls.ChallengesConfig)
}
if acmeIssuer.Challenges.HTTP == nil {
acmeIssuer.Challenges.HTTP = new(caddytls.HTTPChallengeConfig)
}
acmeIssuer.Challenges.HTTP.AlternatePort = globalHTTPPort.(int)
}
if globalHTTPSPort != nil && (acmeIssuer.Challenges == nil || acmeIssuer.Challenges.TLSALPN == nil || acmeIssuer.Challenges.TLSALPN.AlternatePort == 0) {
if acmeIssuer.Challenges == nil {
acmeIssuer.Challenges = new(caddytls.ChallengesConfig)
}
if acmeIssuer.Challenges.TLSALPN == nil {
acmeIssuer.Challenges.TLSALPN = new(caddytls.TLSALPNChallengeConfig)
}
acmeIssuer.Challenges.TLSALPN.AlternatePort = globalHTTPSPort.(int)
}
if globalCertLifetime != nil && acmeIssuer.CertificateLifetime == 0 {
acmeIssuer.CertificateLifetime = globalCertLifetime.(caddy.Duration)
}
return nil
}

// newBaseAutomationPolicy returns a new TLS automation policy that gets
// its values from the global options map. It should be used as the base
// for any other automation policies. A nil policy (and no error) will be
// returned if there are no default/global options. However, if always is
// true, a non-nil value will always be returned (unless there is an error).
func newBaseAutomationPolicy(options map[string]interface{}, warnings []caddyconfig.Warning, always bool) (*caddytls.AutomationPolicy, error) {
acmeCA, hasACMECA := options["acme_ca"]
acmeDNS, hasACMEDNS := options["acme_dns"]
acmeCARoot, hasACMECARoot := options["acme_ca_root"]
email, hasEmail := options["email"]
localCerts, hasLocalCerts := options["local_certs"]
func newBaseAutomationPolicy(
options map[string]any,
_ []caddyconfig.Warning,
always bool,
) (*caddytls.AutomationPolicy, error) {
issuers, hasIssuers := options["cert_issuer"]
_, hasLocalCerts := options["local_certs"]
keyType, hasKeyType := options["key_type"]
ocspStapling, hasOCSPStapling := options["ocsp_stapling"]

hasGlobalAutomationOpts := hasACMECA || hasACMEDNS || hasACMECARoot || hasEmail || hasLocalCerts || hasKeyType
hasGlobalAutomationOpts := hasIssuers || hasLocalCerts || hasKeyType || hasOCSPStapling

// if there are no global options related to automation policies
// set, then we can just return right away
@@ -366,38 +587,24 @@ func newBaseAutomationPolicy(options map[string]interface{}, warnings []caddycon
}

ap := new(caddytls.AutomationPolicy)
if hasKeyType {
ap.KeyType = keyType.(string)
}

if localCerts != nil {
// internal issuer enabled trumps any ACME configurations; useful in testing
ap.Issuer = new(caddytls.InternalIssuer) // we'll encode it later
} else {
if acmeCA == nil {
acmeCA = ""
}
if email == nil {
email = ""
}
mgr := &caddytls.ACMEIssuer{
CA: acmeCA.(string),
Email: email.(string),
}
if acmeDNS != nil {
provName := acmeDNS.(string)
dnsProvModule, err := caddy.GetModule("tls.dns." + provName)
if err != nil {
return nil, fmt.Errorf("getting DNS provider module named '%s': %v", provName, err)
}
mgr.Challenges = &caddytls.ChallengesConfig{
DNSRaw: caddyconfig.JSONModuleObject(dnsProvModule.New(), "provider", provName, &warnings),
}
}
if acmeCARoot != nil {
mgr.TrustedRootsPEMFiles = []string{acmeCARoot.(string)}
}
if keyType != nil {
ap.KeyType = keyType.(string)
}
ap.Issuer = mgr // we'll encode it later
if hasIssuers && hasLocalCerts {
return nil, fmt.Errorf("global options are ambiguous: local_certs is confusing when combined with cert_issuer, because local_certs is also a specific kind of issuer")
}

if hasIssuers {
ap.Issuers = issuers.([]certmagic.Issuer)
} else if hasLocalCerts {
ap.Issuers = []certmagic.Issuer{new(caddytls.InternalIssuer)}
}

if hasOCSPStapling {
ocspConfig := ocspStapling.(certmagic.OCSPConfig)
ap.DisableOCSPStapling = ocspConfig.DisableStapling
ap.OCSPOverrides = ocspConfig.ResponderOverrides
}

return ap, nil
@@ -406,17 +613,50 @@ func newBaseAutomationPolicy(options map[string]interface{}, warnings []caddycon
// consolidateAutomationPolicies combines automation policies that are the same,
// for a cleaner overall output.
func consolidateAutomationPolicies(aps []*caddytls.AutomationPolicy) []*caddytls.AutomationPolicy {
for i := 0; i < len(aps); i++ {
for j := 0; j < len(aps); j++ {
if j == i {
continue
}
// sort from most specific to least specific; we depend on this ordering
sort.SliceStable(aps, func(i, j int) bool {
if automationPolicyIsSubset(aps[i], aps[j]) {
return true
}
if automationPolicyIsSubset(aps[j], aps[i]) {
return false
}
return len(aps[i].SubjectsRaw) > len(aps[j].SubjectsRaw)
})

emptyAPCount := 0
origLenAPs := len(aps)
// compute the number of empty policies (disregarding subjects) - see #4128
emptyAP := new(caddytls.AutomationPolicy)
for i := 0; i < len(aps); i++ {
emptyAP.SubjectsRaw = aps[i].SubjectsRaw
if reflect.DeepEqual(aps[i], emptyAP) {
emptyAPCount++
if !automationPolicyHasAllPublicNames(aps[i]) {
// if this automation policy has internal names, we might as well remove it
// so auto-https can implicitly use the internal issuer
aps = slices.Delete(aps, i, i+1)
i--
}
}
}
// If all policies are empty, we can return nil, as there is no need to set any policy
if emptyAPCount == origLenAPs {
return nil
}

// remove or combine duplicate policies
outer:
for i := 0; i < len(aps); i++ {
// compare only with next policies; we sorted by specificity so we must not delete earlier policies
for j := i + 1; j < len(aps); j++ {
// if they're exactly equal in every way, just keep one of them
if reflect.DeepEqual(aps[i], aps[j]) {
aps = append(aps[:j], aps[j+1:]...)
aps = slices.Delete(aps, j, j+1)
// must re-evaluate current i against next j; can't skip it!
// even if i decrements to -1, will be incremented to 0 immediately
i--
break
continue outer
}

// if the policy is the same, we can keep just one, but we have
@@ -425,30 +665,123 @@ func consolidateAutomationPolicies(aps []*caddytls.AutomationPolicy) []*caddytls
// otherwise the one without any subjects (a catch-all) would be
// eaten up by the one with subjects; and if both have subjects, we
// need to combine their lists
if bytes.Equal(aps[i].IssuerRaw, aps[j].IssuerRaw) &&
if reflect.DeepEqual(aps[i].IssuersRaw, aps[j].IssuersRaw) &&
reflect.DeepEqual(aps[i].ManagersRaw, aps[j].ManagersRaw) &&
bytes.Equal(aps[i].StorageRaw, aps[j].StorageRaw) &&
aps[i].MustStaple == aps[j].MustStaple &&
aps[i].KeyType == aps[j].KeyType &&
aps[i].OnDemand == aps[j].OnDemand &&
aps[i].ReusePrivateKeys == aps[j].ReusePrivateKeys &&
aps[i].RenewalWindowRatio == aps[j].RenewalWindowRatio {
if len(aps[i].Subjects) == 0 && len(aps[j].Subjects) > 0 {
aps = append(aps[:j], aps[j+1:]...)
} else if len(aps[i].Subjects) > 0 && len(aps[j].Subjects) == 0 {
aps = append(aps[:i], aps[i+1:]...)
if len(aps[i].SubjectsRaw) > 0 && len(aps[j].SubjectsRaw) == 0 {
// later policy (at j) has no subjects ("catch-all"), so we can
// remove the identical-but-more-specific policy that comes first
// AS LONG AS it is not shadowed by another policy before it; e.g.
// if policy i is for example.com, policy i+1 is '*.com', and policy
// j is catch-all, we cannot remove policy i because that would
// cause example.com to be served by the less specific policy for
// '*.com', which might be different (yes we've seen this happen)
if automationPolicyShadows(i, aps) >= j {
aps = slices.Delete(aps, i, i+1)
i--
continue outer
}
} else {
aps[i].Subjects = append(aps[i].Subjects, aps[j].Subjects...)
aps = append(aps[:j], aps[j+1:]...)
// avoid repeated subjects
for _, subj := range aps[j].SubjectsRaw {
if !slices.Contains(aps[i].SubjectsRaw, subj) {
aps[i].SubjectsRaw = append(aps[i].SubjectsRaw, subj)
}
}
aps = slices.Delete(aps, j, j+1)
j--
}
i--
break
}
}
}

// ensure any catch-all policies go last
sort.SliceStable(aps, func(i, j int) bool {
return len(aps[i].Subjects) > len(aps[j].Subjects)
})

return aps
}

// automationPolicyIsSubset returns true if a's subjects are a subset
// of b's subjects.
func automationPolicyIsSubset(a, b *caddytls.AutomationPolicy) bool {
if len(b.SubjectsRaw) == 0 {
return true
}
if len(a.SubjectsRaw) == 0 {
return false
}
for _, aSubj := range a.SubjectsRaw {
inSuperset := slices.ContainsFunc(b.SubjectsRaw, func(bSubj string) bool {
return certmagic.MatchWildcard(aSubj, bSubj)
})
if !inSuperset {
return false
}
}
return true
}

// automationPolicyShadows returns the index of a policy that aps[i] shadows;
// in other words, for all policies after position i, if that policy covers
// the same subjects but is less specific, that policy's position is returned,
// or -1 if no shadowing is found. For example, if policy i is for
// "foo.example.com" and policy i+2 is for "*.example.com", then i+2 will be
// returned, since that policy is shadowed by i, which is in front.
func automationPolicyShadows(i int, aps []*caddytls.AutomationPolicy) int {
for j := i + 1; j < len(aps); j++ {
if automationPolicyIsSubset(aps[i], aps[j]) {
return j
}
}
return -1
}

// subjectQualifiesForPublicCert is like certmagic.SubjectQualifiesForPublicCert() except
// that this allows domains with multiple wildcard levels like '*.*.example.com' to qualify
// if the automation policy has OnDemand enabled (i.e. this function is more lenient).
//
// IP subjects are considered as non-qualifying for public certs. Technically, there are
// now public ACME CAs as well as non-ACME CAs that issue IP certificates. But this function
// is used solely for implicit automation (defaults), where it gets really complicated to
// keep track of which issuers support IP certificates in which circumstances. Currently,
// issuers that support IP certificates are very few, and all require some sort of config
// from the user anyway (such as an account credential). Since we cannot implicitly and
// automatically get public IP certs without configuration from the user, we treat IPs as
// not qualifying for public certificates. Users should expressly configure an issuer
// that supports IP certs for that purpose.
func subjectQualifiesForPublicCert(ap *caddytls.AutomationPolicy, subj string) bool {
return !certmagic.SubjectIsIP(subj) &&
!certmagic.SubjectIsInternal(subj) &&
(strings.Count(subj, "*.") < 2 || ap.OnDemand)
}

// automationPolicyHasAllPublicNames returns true if all the names on the policy
// qualify for public certs and none are tailscale domains.
func automationPolicyHasAllPublicNames(ap *caddytls.AutomationPolicy) bool {
return !slices.ContainsFunc(ap.SubjectsRaw, func(i string) bool {
return !subjectQualifiesForPublicCert(ap, i) || isTailscaleDomain(i)
})
}

func isTailscaleDomain(name string) bool {
return strings.HasSuffix(strings.ToLower(name), ".ts.net")
}

func hostsCoveredByWildcard(hosts []string, wildcards []string) bool {
if len(hosts) == 0 || len(wildcards) == 0 {
return false
}
for _, host := range hosts {
for _, wildcard := range wildcards {
if strings.HasPrefix(host, "*.") {
continue
}
if certmagic.MatchWildcard(host, "*."+wildcard) {
return true
}
}
}
return false
}
56	caddyconfig/httpcaddyfile/tlsapp_test.go	Normal file

@@ -0,0 +1,56 @@
package httpcaddyfile

import (
	"testing"

	"github.com/caddyserver/caddy/v2/modules/caddytls"
)

func TestAutomationPolicyIsSubset(t *testing.T) {
	for i, test := range []struct {
		a, b   []string
		expect bool
	}{
		{
			a:      []string{"example.com"},
			b:      []string{},
			expect: true,
		},
		{
			a:      []string{},
			b:      []string{"example.com"},
			expect: false,
		},
		{
			a:      []string{"foo.example.com"},
			b:      []string{"*.example.com"},
			expect: true,
		},
		{
			a:      []string{"foo.example.com"},
			b:      []string{"foo.example.com"},
			expect: true,
		},
		{
			a:      []string{"foo.example.com"},
			b:      []string{"example.com"},
			expect: false,
		},
		{
			a:      []string{"example.com", "foo.example.com"},
			b:      []string{"*.com", "*.*.com"},
			expect: true,
		},
		{
			a:      []string{"example.com", "foo.example.com"},
			b:      []string{"*.com"},
			expect: false,
		},
	} {
		apA := &caddytls.AutomationPolicy{SubjectsRaw: test.a}
		apB := &caddytls.AutomationPolicy{SubjectsRaw: test.b}
		if actual := automationPolicyIsSubset(apA, apB); actual != test.expect {
			t.Errorf("Test %d: Expected %t but got %t (A: %v B: %v)", i, test.expect, actual, test.a, test.b)
		}
	}
}
218	caddyconfig/httploader.go	Normal file

@@ -0,0 +1,218 @@
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package caddyconfig

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"

	"github.com/caddyserver/caddy/v2"
)

func init() {
	caddy.RegisterModule(HTTPLoader{})
}

// HTTPLoader can load Caddy configs over HTTP(S).
//
// If the response is not a JSON config, a config adapter must be specified
// either in the loader config (`adapter`), or in the Content-Type HTTP header
// returned in the HTTP response from the server. The Content-Type header is
// read just like the admin API's `/load` endpoint. If you don't have control
// over the HTTP server (but can still trust its response), you can override
// the Content-Type header by setting the `adapter` property in this config.
type HTTPLoader struct {
	// The method for the request. Default: GET
	Method string `json:"method,omitempty"`

	// The URL of the request.
	URL string `json:"url,omitempty"`

	// HTTP headers to add to the request.
	Headers http.Header `json:"header,omitempty"`

	// Maximum time allowed for a complete connection and request.
	Timeout caddy.Duration `json:"timeout,omitempty"`

	// The name of the config adapter to use, if any. Only needed
	// if the HTTP response is not a JSON config and if the server's
	// Content-Type header is missing or incorrect.
	Adapter string `json:"adapter,omitempty"`

	TLS *struct {
		// Present this instance's managed remote identity credentials to the server.
		UseServerIdentity bool `json:"use_server_identity,omitempty"`

		// PEM-encoded client certificate filename to present to the server.
		ClientCertificateFile string `json:"client_certificate_file,omitempty"`

		// PEM-encoded key to use with the client certificate.
		ClientCertificateKeyFile string `json:"client_certificate_key_file,omitempty"`

		// List of PEM-encoded CA certificate files to add to the same trust
		// store as RootCAPool (or root_ca_pool in the JSON).
		RootCAPEMFiles []string `json:"root_ca_pem_files,omitempty"`
	} `json:"tls,omitempty"`
}

// CaddyModule returns the Caddy module information.
func (HTTPLoader) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "caddy.config_loaders.http",
		New: func() caddy.Module { return new(HTTPLoader) },
	}
}

// LoadConfig loads a Caddy config.
func (hl HTTPLoader) LoadConfig(ctx caddy.Context) ([]byte, error) {
	repl := caddy.NewReplacer()

	client, err := hl.makeClient(ctx)
	if err != nil {
		return nil, err
	}

	method := repl.ReplaceAll(hl.Method, "")
	if method == "" {
		method = http.MethodGet
	}

	url := repl.ReplaceAll(hl.URL, "")
	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		return nil, err
	}
	for key, vals := range hl.Headers {
		for _, val := range vals {
			req.Header.Add(repl.ReplaceAll(key, ""), repl.ReplaceKnown(val, ""))
		}
	}

	resp, err := doHttpCallWithRetries(ctx, client, req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return nil, fmt.Errorf("server responded with HTTP %d", resp.StatusCode)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	// adapt the config based on either manually-configured adapter or server's response header
	ct := resp.Header.Get("Content-Type")
	if hl.Adapter != "" {
		ct = "text/" + hl.Adapter
	}
||||
result, warnings, err := adaptByContentType(ct, body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, warn := range warnings {
|
||||
ctx.Logger().Warn(warn.String())
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
func attemptHttpCall(client *http.Client, request *http.Request) (*http.Response, error) {
|
||||
resp, err := client.Do(request)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("problem calling http loader url: %v", err)
|
||||
} else if resp.StatusCode < 200 || resp.StatusCode > 499 {
|
||||
resp.Body.Close()
|
||||
return nil, fmt.Errorf("bad response status code from http loader url: %v", resp.StatusCode)
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
func doHttpCallWithRetries(ctx caddy.Context, client *http.Client, request *http.Request) (*http.Response, error) {
|
||||
var resp *http.Response
|
||||
var err error
|
||||
const maxAttempts = 10
|
||||
|
||||
for i := 0; i < maxAttempts; i++ {
|
||||
resp, err = attemptHttpCall(client, request)
|
||||
if err != nil && i < maxAttempts-1 {
|
||||
select {
|
||||
case <-time.After(time.Millisecond * 500):
|
||||
case <-ctx.Done():
|
||||
return resp, ctx.Err()
|
||||
}
|
||||
} else {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return resp, err
|
||||
}
|
||||
|
||||
func (hl HTTPLoader) makeClient(ctx caddy.Context) (*http.Client, error) {
|
||||
client := &http.Client{
|
||||
Timeout: time.Duration(hl.Timeout),
|
||||
}
|
||||
|
||||
if hl.TLS != nil {
|
||||
var tlsConfig *tls.Config
|
||||
|
||||
// client authentication
|
||||
if hl.TLS.UseServerIdentity {
|
||||
certs, err := ctx.IdentityCredentials(ctx.Logger())
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("getting server identity credentials: %v", err)
|
||||
}
|
||||
// See https://github.com/securego/gosec/issues/1054#issuecomment-2072235199
|
||||
//nolint:gosec
|
||||
tlsConfig = &tls.Config{Certificates: certs}
|
||||
} else if hl.TLS.ClientCertificateFile != "" && hl.TLS.ClientCertificateKeyFile != "" {
|
||||
cert, err := tls.LoadX509KeyPair(hl.TLS.ClientCertificateFile, hl.TLS.ClientCertificateKeyFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
//nolint:gosec
|
||||
tlsConfig = &tls.Config{Certificates: []tls.Certificate{cert}}
|
||||
}
|
||||
|
||||
// trusted server certs
|
||||
if len(hl.TLS.RootCAPEMFiles) > 0 {
|
||||
rootPool := x509.NewCertPool()
|
||||
for _, pemFile := range hl.TLS.RootCAPEMFiles {
|
||||
pemData, err := os.ReadFile(pemFile)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed reading ca cert: %v", err)
|
||||
}
|
||||
rootPool.AppendCertsFromPEM(pemData)
|
||||
}
|
||||
if tlsConfig == nil {
|
||||
tlsConfig = new(tls.Config)
|
||||
}
|
||||
tlsConfig.RootCAs = rootPool
|
||||
}
|
||||
|
||||
client.Transport = &http.Transport{TLSClientConfig: tlsConfig}
|
||||
}
|
||||
|
||||
return client, nil
|
||||
}
|
||||
|
||||
var _ caddy.ConfigLoader = (*HTTPLoader)(nil)
|
|
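Taken together, the struct tags above define the JSON shape of the `caddy.config_loaders.http` module. A hypothetical instance of that config might look like the following sketch; the URL, header value, timeout, and CA path are illustrative assumptions, not values from this commit:

```json
{
	"method": "GET",
	"url": "https://config.example.com/caddy.json",
	"header": {
		"Authorization": ["Bearer {env.CONFIG_TOKEN}"]
	},
	"timeout": "30s",
	"tls": {
		"root_ca_pem_files": ["/etc/ssl/internal-ca.pem"]
	}
}
```

Per the doc comment, an `"adapter"` property would only be added here if the server's Content-Type header is missing or wrong.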
@@ -58,6 +58,10 @@ func (al adminLoad) Routes() []caddy.AdminRoute {
			Pattern: "/load",
			Handler: caddy.AdminHandlerFunc(al.handleLoad),
		},
		{
			Pattern: "/adapt",
			Handler: caddy.AdminHandlerFunc(al.handleAdapt),
		},
	}
}

@@ -69,8 +73,8 @@ func (al adminLoad) Routes() []caddy.AdminRoute {
func (adminLoad) handleLoad(w http.ResponseWriter, r *http.Request) error {
	if r.Method != http.MethodPost {
		return caddy.APIError{
			Code: http.StatusMethodNotAllowed,
			Err:  fmt.Errorf("method not allowed"),
			HTTPStatus: http.StatusMethodNotAllowed,
			Err:        fmt.Errorf("method not allowed"),
		}
	}

@@ -81,8 +85,8 @@ func (adminLoad) handleLoad(w http.ResponseWriter, r *http.Request) error {
	_, err := io.Copy(buf, r.Body)
	if err != nil {
		return caddy.APIError{
			Code: http.StatusBadRequest,
			Err:  fmt.Errorf("reading request body: %v", err),
			HTTPStatus: http.StatusBadRequest,
			Err:        fmt.Errorf("reading request body: %v", err),
		}
	}
	body := buf.Bytes()

@@ -90,45 +94,21 @@ func (adminLoad) handleLoad(w http.ResponseWriter, r *http.Request) error {
	// if the config is formatted other than Caddy's native
	// JSON, we need to adapt it before loading it
	if ctHeader := r.Header.Get("Content-Type"); ctHeader != "" {
		ct, _, err := mime.ParseMediaType(ctHeader)
		result, warnings, err := adaptByContentType(ctHeader, body)
		if err != nil {
			return caddy.APIError{
				Code: http.StatusBadRequest,
				Err:  fmt.Errorf("invalid Content-Type: %v", err),
				HTTPStatus: http.StatusBadRequest,
				Err:        err,
			}
		}
		if !strings.HasSuffix(ct, "/json") {
			slashIdx := strings.Index(ct, "/")
			if slashIdx < 0 {
				return caddy.APIError{
					Code: http.StatusBadRequest,
					Err:  fmt.Errorf("malformed Content-Type"),
				}
			}
			adapterName := ct[slashIdx+1:]
			cfgAdapter := GetAdapter(adapterName)
			if cfgAdapter == nil {
				return caddy.APIError{
					Code: http.StatusBadRequest,
					Err:  fmt.Errorf("unrecognized config adapter '%s'", adapterName),
				}
			}
			result, warnings, err := cfgAdapter.Adapt(body, nil)
			if len(warnings) > 0 {
				respBody, err := json.Marshal(warnings)
				if err != nil {
					return caddy.APIError{
						Code: http.StatusBadRequest,
						Err:  fmt.Errorf("adapting config using %s adapter: %v", adapterName, err),
					}
					caddy.Log().Named("admin.api.load").Error(err.Error())
				}
		if len(warnings) > 0 {
			respBody, err := json.Marshal(warnings)
			if err != nil {
				caddy.Log().Named("admin.api.load").Error(err.Error())
			}
			_, _ = w.Write(respBody)
		}
		body = result
				_, _ = w.Write(respBody)
			}
			body = result
	}

	forceReload := r.Header.Get("Cache-Control") == "must-revalidate"

@@ -136,8 +116,8 @@ func (adminLoad) handleLoad(w http.ResponseWriter, r *http.Request) error {
	err = caddy.Load(body, forceReload)
	if err != nil {
		return caddy.APIError{
			Code: http.StatusBadRequest,
			Err:  fmt.Errorf("loading config: %v", err),
			HTTPStatus: http.StatusBadRequest,
			Err:        fmt.Errorf("loading config: %v", err),
		}
	}

@@ -146,8 +126,89 @@ func (adminLoad) handleLoad(w http.ResponseWriter, r *http.Request) error {
	return nil
}

// handleAdapt adapts the given Caddy config to JSON and responds with the result.
func (adminLoad) handleAdapt(w http.ResponseWriter, r *http.Request) error {
	if r.Method != http.MethodPost {
		return caddy.APIError{
			HTTPStatus: http.StatusMethodNotAllowed,
			Err:        fmt.Errorf("method not allowed"),
		}
	}

	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	_, err := io.Copy(buf, r.Body)
	if err != nil {
		return caddy.APIError{
			HTTPStatus: http.StatusBadRequest,
			Err:        fmt.Errorf("reading request body: %v", err),
		}
	}

	result, warnings, err := adaptByContentType(r.Header.Get("Content-Type"), buf.Bytes())
	if err != nil {
		return caddy.APIError{
			HTTPStatus: http.StatusBadRequest,
			Err:        err,
		}
	}

	out := struct {
		Warnings []Warning       `json:"warnings,omitempty"`
		Result   json.RawMessage `json:"result"`
	}{
		Warnings: warnings,
		Result:   result,
	}

	w.Header().Set("Content-Type", "application/json")
	return json.NewEncoder(w).Encode(out)
}

// adaptByContentType adapts body to Caddy JSON using the adapter specified by contentType.
// If contentType is empty or ends with "/json", the input will be returned, as a no-op.
func adaptByContentType(contentType string, body []byte) ([]byte, []Warning, error) {
	// assume JSON as the default
	if contentType == "" {
		return body, nil, nil
	}

	ct, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return nil, nil, caddy.APIError{
			HTTPStatus: http.StatusBadRequest,
			Err:        fmt.Errorf("invalid Content-Type: %v", err),
		}
	}

	// if already JSON, no need to adapt
	if strings.HasSuffix(ct, "/json") {
		return body, nil, nil
	}

	// adapter name should be suffix of MIME type
	_, adapterName, slashFound := strings.Cut(ct, "/")
	if !slashFound {
		return nil, nil, fmt.Errorf("malformed Content-Type")
	}

	cfgAdapter := GetAdapter(adapterName)
	if cfgAdapter == nil {
		return nil, nil, fmt.Errorf("unrecognized config adapter '%s'", adapterName)
	}

	result, warnings, err := cfgAdapter.Adapt(body, nil)
	if err != nil {
		return nil, nil, fmt.Errorf("adapting config using %s adapter: %v", adapterName, err)
	}

	return result, warnings, nil
}

var bufPool = sync.Pool{
	New: func() interface{} {
	New: func() any {
		return new(bytes.Buffer)
	},
}

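The core of `adaptByContentType` in the hunk above is a Content-Type to adapter-name mapping: parse the media type, treat any `*/json` as a no-op, otherwise use the subtype as the adapter name. A standalone sketch of just that parsing step, with stdlib only (the function name is mine, not from this commit):

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// adapterNameFromContentType mirrors the parsing logic of adaptByContentType:
// it returns "" for JSON (no adapter needed), the MIME subtype otherwise.
func adapterNameFromContentType(contentType string) (string, error) {
	// parse away any parameters like "; charset=utf-8"
	ct, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return "", fmt.Errorf("invalid Content-Type: %v", err)
	}
	// already JSON: no adaptation needed
	if strings.HasSuffix(ct, "/json") {
		return "", nil
	}
	// adapter name is the suffix of the MIME type
	_, adapterName, slashFound := strings.Cut(ct, "/")
	if !slashFound {
		return "", fmt.Errorf("malformed Content-Type")
	}
	return adapterName, nil
}

func main() {
	for _, ct := range []string{"application/json", "text/caddyfile; charset=utf-8"} {
		name, err := adapterNameFromContentType(ct)
		fmt.Printf("%q -> adapter %q, err=%v\n", ct, name, err)
	}
}
```

Note how overriding the header via the loader's `adapter` property works in the real code: it simply synthesizes `"text/" + adapter` and runs it through the same path.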
20
caddytest/caddy.ca.cer
Normal file

@@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDSzCCAjOgAwIBAgIUfIRObjWNUA4jxQ/0x8BOCvE2Vw4wDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAwwLRWFzeS1SU0EgQ0EwHhcNMTkwODI4MTYyNTU5WhcNMjkw
ODI1MTYyNTU5WjAWMRQwEgYDVQQDDAtFYXN5LVJTQSBDQTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAK5m5elxhQfMp/3aVJ4JnpN9PUSz6LlP6LePAPFU
7gqohVVFVtDkChJAG3FNkNQNlieVTja/bgH9IcC6oKbROwdY1h0MvNV8AHHigvl0
3WuJD8g2ReVFXXwsnrPmKXCFzQyMI6TYk3m2gYrXsZOU1GLnfMRC3KAMRgE2F45t
wOs9hqG169YJ6mM2eQjzjCHWI6S2/iUYvYxRkCOlYUbLsMD/AhgAf1plzg6LPqNx
tdlwxZnA0ytgkmhK67HtzJu0+ovUCsMv0RwcMhsEo9T8nyFAGt9XLZ63X5WpBCTU
ApaAUhnG0XnerjmUWb6eUWw4zev54sEfY5F3x002iQaW6cECAwEAAaOBkDCBjTAd
BgNVHQ4EFgQU4CBUbZsS2GaNIkGRz/cBsD5ivjswUQYDVR0jBEowSIAU4CBUbZsS
2GaNIkGRz/cBsD5ivjuhGqQYMBYxFDASBgNVBAMMC0Vhc3ktUlNBIENBghR8hE5u
NY1QDiPFD/THwE4K8TZXDjAMBgNVHRMEBTADAQH/MAsGA1UdDwQEAwIBBjANBgkq
hkiG9w0BAQsFAAOCAQEAKB3V4HIzoiO/Ch6WMj9bLJ2FGbpkMrcb/Eq01hT5zcfK
D66lVS1MlK+cRL446Z2b2KDP1oFyVs+qmrmtdwrWgD+nfe2sBmmIHo9m9KygMkEO
fG3MghGTEcS+0cTKEcoHYWYyOqQh6jnedXY8Cdm4GM1hAc9MiL3/sqV8YCVSLNnk
oNysmr06/rZ0MCUZPGUtRmfd0heWhrfzAKw2HLgX+RAmpOE2MZqWcjvqKGyaRiaZ
ks4nJkP6521aC2Lgp0HhCz1j8/uQ5ldoDszCnu/iro0NAsNtudTMD+YoLQxLqdle
Ih6CW+illc2VdXwj7mn6J04yns9jfE2jRjW/yTLFuQ==
-----END CERTIFICATE-----

@@ -7,13 +7,15 @@ import (
	"encoding/json"
	"errors"
	"fmt"
	"io/ioutil"
	"io"
	"io/fs"
	"log"
	"net"
	"net/http"
	"net/http/cookiejar"
	"os"
	"path"
	"reflect"
	"regexp"
	"runtime"
	"strings"

@@ -21,9 +23,10 @@ import (
	"time"

	"github.com/aryann/difflib"
	"github.com/caddyserver/caddy/v2/caddyconfig"

	caddycmd "github.com/caddyserver/caddy/v2/cmd"

	"github.com/caddyserver/caddy/v2/caddyconfig"
	// plug in Caddy modules here
	_ "github.com/caddyserver/caddy/v2/modules/standard"
)

@@ -33,13 +36,19 @@ type Defaults struct {
	// Port we expect caddy to be listening on
	AdminPort int
	// Certificates we expect to be loaded before attempting to run the tests
	Certifcates []string
	Certificates []string
	// TestRequestTimeout is the time to wait for a http request to complete
	TestRequestTimeout time.Duration
	// LoadRequestTimeout is the time to wait for the config to be loaded against the caddy server
	LoadRequestTimeout time.Duration
}

// Default testing values
var Default = Defaults{
	AdminPort:   2019,
	Certifcates: []string{"/caddy.localhost.crt", "/caddy.localhost.key"},
	AdminPort:    2999, // different from what a real server also running on a developer's machine might be
	Certificates: []string{"/caddy.localhost.crt", "/caddy.localhost.key"},
	TestRequestTimeout: 5 * time.Second,
	LoadRequestTimeout: 5 * time.Second,
}

var (

@@ -49,13 +58,13 @@ var (

// Tester represents an instance of a test client.
type Tester struct {
	Client *http.Client
	t      *testing.T
	Client       *http.Client
	configLoaded bool
	t            testing.TB
}

// NewTester will create a new testing client with an attached cookie jar
func NewTester(t *testing.T) *Tester {

func NewTester(t testing.TB) *Tester {
	jar, err := cookiejar.New(nil)
	if err != nil {
		t.Fatalf("failed to create cookiejar: %s", err)

@@ -65,9 +74,10 @@ func NewTester(t *testing.T) *Tester {
		Client: &http.Client{
			Transport: CreateTestingTransport(),
			Jar:       jar,
			Timeout:   5 * time.Second,
			Timeout:   Default.TestRequestTimeout,
		},
		t: t,
		configLoaded: false,
		t:            t,
	}
}

@@ -85,47 +95,63 @@ func timeElapsed(start time.Time, name string) {
// InitServer will configure the server with a configuration of a specific
// type. The configType must be either "json" or the adapter type.
func (tc *Tester) InitServer(rawConfig string, configType string) {
	if err := tc.initServer(rawConfig, configType); err != nil {
		tc.t.Logf("failed to load config: %s", err)
		tc.t.Fail()
	}
	if err := tc.ensureConfigRunning(rawConfig, configType); err != nil {
		tc.t.Logf("failed ensuring config is running: %s", err)
		tc.t.Fail()
	}
}

// initServer will configure the server with a configuration of a specific
// type. The configType must be either "json" or the adapter type.
func (tc *Tester) initServer(rawConfig string, configType string) error {
	if testing.Short() {
		tc.t.SkipNow()
		return nil
	}

	err := validateTestPrerequisites()
	err := validateTestPrerequisites(tc.t)
	if err != nil {
		tc.t.Skipf("skipping tests as failed integration prerequisites. %s", err)
		return nil
	}

	tc.t.Cleanup(func() {
		if tc.t.Failed() {
		if tc.t.Failed() && tc.configLoaded {
			res, err := http.Get(fmt.Sprintf("http://localhost:%d/config/", Default.AdminPort))
			if err != nil {
				tc.t.Log("unable to read the current config")
				return
			}
			defer res.Body.Close()
			body, err := ioutil.ReadAll(res.Body)
			body, _ := io.ReadAll(res.Body)

			var out bytes.Buffer
			json.Indent(&out, body, "", " ")
			_ = json.Indent(&out, body, "", " ")
			tc.t.Logf("----------- failed with config -----------\n%s", out.String())
		}
	})

	rawConfig = prependCaddyFilePath(rawConfig)
	// normalize JSON config
	if configType == "json" {
		tc.t.Logf("Before: %s", rawConfig)
		var conf any
		if err := json.Unmarshal([]byte(rawConfig), &conf); err != nil {
			return err
		}
		c, err := json.Marshal(conf)
		if err != nil {
			return err
		}
		rawConfig = string(c)
		tc.t.Logf("After: %s", rawConfig)
	}
	client := &http.Client{
		Timeout: time.Second * 2,
		Timeout: Default.LoadRequestTimeout,
	}
	start := time.Now()
	req, err := http.NewRequest("POST", fmt.Sprintf("http://localhost:%d/load", Default.AdminPort), strings.NewReader(rawConfig))

@@ -148,7 +174,7 @@ func (tc *Tester) initServer(rawConfig string, configType string) error {
	timeElapsed(start, "caddytest: config load time")

	defer res.Body.Close()
	body, err := ioutil.ReadAll(res.Body)
	body, err := io.ReadAll(res.Body)
	if err != nil {
		tc.t.Errorf("unable to read response. %s", err)
		return err

@@ -158,69 +184,117 @@ func (tc *Tester) initServer(rawConfig string, configType string) error {
		return configLoadError{Response: string(body)}
	}

	tc.configLoaded = true
	return nil
}

var hasValidated bool
var arePrerequisitesValid bool

func validateTestPrerequisites() error {

	if hasValidated {
		if !arePrerequisitesValid {
			return errors.New("caddy integration prerequisites failed. see first error")
func (tc *Tester) ensureConfigRunning(rawConfig string, configType string) error {
	expectedBytes := []byte(prependCaddyFilePath(rawConfig))
	if configType != "json" {
		adapter := caddyconfig.GetAdapter(configType)
		if adapter == nil {
			return fmt.Errorf("adapter of config type is missing: %s", configType)
		}
		return nil
		expectedBytes, _, _ = adapter.Adapt([]byte(rawConfig), nil)
	}

	hasValidated = true
	arePrerequisitesValid = false
	var expected any
	err := json.Unmarshal(expectedBytes, &expected)
	if err != nil {
		return err
	}

	client := &http.Client{
		Timeout: Default.LoadRequestTimeout,
	}

	fetchConfig := func(client *http.Client) any {
		resp, err := client.Get(fmt.Sprintf("http://localhost:%d/config/", Default.AdminPort))
		if err != nil {
			return nil
		}
		defer resp.Body.Close()
		actualBytes, err := io.ReadAll(resp.Body)
		if err != nil {
			return nil
		}
		var actual any
		err = json.Unmarshal(actualBytes, &actual)
		if err != nil {
			return nil
		}
		return actual
	}

	for retries := 10; retries > 0; retries-- {
		if reflect.DeepEqual(expected, fetchConfig(client)) {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	tc.t.Errorf("POSTed configuration isn't active")
	return errors.New("EnsureConfigRunning: POSTed configuration isn't active")
}

const initConfig = `{
	admin localhost:2999
}
`

// validateTestPrerequisites ensures the certificates are available in the
// designated path and Caddy sub-process is running.
func validateTestPrerequisites(t testing.TB) error {
	// check certificates are found
	for _, certName := range Default.Certifcates {
		if _, err := os.Stat(getIntegrationDir() + certName); os.IsNotExist(err) {
	for _, certName := range Default.Certificates {
		if _, err := os.Stat(getIntegrationDir() + certName); errors.Is(err, fs.ErrNotExist) {
			return fmt.Errorf("caddy integration test certificates (%s) not found", certName)
		}
	}

	if isCaddyAdminRunning() != nil {
		// setup the init config file, and set the cleanup afterwards
		f, err := os.CreateTemp("", "")
		if err != nil {
			return err
		}
		t.Cleanup(func() {
			os.Remove(f.Name())
		})
		if _, err := f.WriteString(initConfig); err != nil {
			return err
		}

		// start inprocess caddy server
		os.Args = []string{"caddy", "run"}
		os.Args = []string{"caddy", "run", "--config", f.Name(), "--adapter", "caddyfile"}
		go func() {
			caddycmd.Main()
		}()

		// wait for caddy to start
		retries := 4
		for ; retries > 0 && isCaddyAdminRunning() != nil; retries-- {
			time.Sleep(10 * time.Millisecond)
		// wait for caddy to start serving the initial config
		for retries := 10; retries > 0 && isCaddyAdminRunning() != nil; retries-- {
			time.Sleep(1 * time.Second)
		}
	}

	// assert that caddy is running
	if err := isCaddyAdminRunning(); err != nil {
		return err
	}

	arePrerequisitesValid = true
	return nil
	// one more time to return the error
	return isCaddyAdminRunning()
}

func isCaddyAdminRunning() error {
	// assert that caddy is running
	client := &http.Client{
		Timeout: time.Second * 2,
		Timeout: Default.LoadRequestTimeout,
	}
	_, err := client.Get(fmt.Sprintf("http://localhost:%d/config/", Default.AdminPort))
	resp, err := client.Get(fmt.Sprintf("http://localhost:%d/config/", Default.AdminPort))
	if err != nil {
		return errors.New("caddy integration test caddy server not running. Expected to be listening on localhost:2019")
		return fmt.Errorf("caddy integration test caddy server not running. Expected to be listening on localhost:%d", Default.AdminPort)
	}
	resp.Body.Close()

	return nil
}

func getIntegrationDir() string {
	_, filename, _, ok := runtime.Caller(1)
	if !ok {
		panic("unable to determine the current file path")

@@ -240,7 +314,6 @@ func prependCaddyFilePath(rawConfig string) string {

// CreateTestingTransport creates a testing transport that forces call dialing connections to happen locally
func CreateTestingTransport() *http.Transport {

	dialer := net.Dialer{
		Timeout:   5 * time.Second,
		KeepAlive: 5 * time.Second,

@@ -262,13 +335,12 @@ func CreateTestingTransport() *http.Transport {
		IdleConnTimeout:       90 * time.Second,
		TLSHandshakeTimeout:   5 * time.Second,
		ExpectContinueTimeout: 1 * time.Second,
		TLSClientConfig:       &tls.Config{InsecureSkipVerify: true},
		TLSClientConfig:       &tls.Config{InsecureSkipVerify: true}, //nolint:gosec
	}
}

// AssertLoadError will load a config and expect an error
func AssertLoadError(t *testing.T, rawConfig string, configType string, expectedError string) {
	tc := NewTester(t)

	err := tc.initServer(rawConfig, configType)

@@ -279,7 +351,6 @@ func AssertLoadError(t *testing.T, rawConfig string, configType string, expected

// AssertRedirect makes a request and asserts the redirection happens
func (tc *Tester) AssertRedirect(requestURI string, expectedToLocation string, expectedStatusCode int) *http.Response {
	redirectPolicyFunc := func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse
	}

@@ -303,35 +374,45 @@ func (tc *Tester) AssertRedirect(requestURI string, expectedToLocation string, e
	if err != nil {
		tc.t.Errorf("requesting \"%s\" expected location: \"%s\" but got error: %s", requestURI, expectedToLocation, err)
	}

	if expectedToLocation != loc.String() {
		tc.t.Errorf("requesting \"%s\" expected location: \"%s\" but got \"%s\"", requestURI, expectedToLocation, loc.String())
	if loc == nil && expectedToLocation != "" {
		tc.t.Errorf("requesting \"%s\" expected a Location header, but didn't get one", requestURI)
	}
	if loc != nil {
		if expectedToLocation != loc.String() {
			tc.t.Errorf("requesting \"%s\" expected location: \"%s\" but got \"%s\"", requestURI, expectedToLocation, loc.String())
		}
	}

	return resp
}

// AssertAdapt adapts a config and then tests it against an expected result
func AssertAdapt(t *testing.T, rawConfig string, adapterName string, expectedResponse string) {

// CompareAdapt adapts a config and then compares it against an expected result
func CompareAdapt(t testing.TB, filename, rawConfig string, adapterName string, expectedResponse string) bool {
	cfgAdapter := caddyconfig.GetAdapter(adapterName)
	if cfgAdapter == nil {
		t.Errorf("unrecognized config adapter '%s'", adapterName)
		return
		t.Logf("unrecognized config adapter '%s'", adapterName)
		return false
	}

	options := make(map[string]interface{})
	options["pretty"] = "true"
	options := make(map[string]any)

	result, warnings, err := cfgAdapter.Adapt([]byte(rawConfig), options)
	if err != nil {
		t.Errorf("adapting config using %s adapter: %v", adapterName, err)
		return
		t.Logf("adapting config using %s adapter: %v", adapterName, err)
		return false
	}

	// prettify results to keep tests human-manageable
	var prettyBuf bytes.Buffer
	err = json.Indent(&prettyBuf, result, "", "\t")
	if err != nil {
		return false
	}
	result = prettyBuf.Bytes()

	if len(warnings) > 0 {
		for _, w := range warnings {
			t.Logf("warning: directive: %s : %s", w.Directive, w.Message)
			t.Logf("warning: %s:%d: %s: %s", filename, w.Line, w.Directive, w.Message)
		}
	}

@@ -359,13 +440,22 @@ func AssertAdapt(t *testing.T, rawConfig string, adapterName string, expectedRes
			fmt.Printf(" + %s\n", d.Payload)
		}
	}
		return false
	}
	return true
}

// AssertAdapt adapts a config and then tests it against an expected result
func AssertAdapt(t testing.TB, rawConfig string, adapterName string, expectedResponse string) {
	ok := CompareAdapt(t, "Caddyfile", rawConfig, adapterName, expectedResponse)
	if !ok {
		t.Fail()
	}
}

// Generic request functions

func applyHeaders(t *testing.T, req *http.Request, requestHeaders []string) {
func applyHeaders(t testing.TB, req *http.Request, requestHeaders []string) {
	requestContentType := ""
	for _, requestHeader := range requestHeaders {
		arr := strings.SplitAfterN(requestHeader, ":", 2)

@@ -385,14 +475,13 @@ func applyHeaders(t *testing.T, req *http.Request, requestHeaders []string) {

// AssertResponseCode will execute the request and verify the status code, returns a response for additional assertions
func (tc *Tester) AssertResponseCode(req *http.Request, expectedStatusCode int) *http.Response {
	resp, err := tc.Client.Do(req)
	if err != nil {
		tc.t.Fatalf("failed to call server %s", err)
	}

	if expectedStatusCode != resp.StatusCode {
		tc.t.Errorf("requesting \"%s\" expected status code: %d but got %d", req.RequestURI, expectedStatusCode, resp.StatusCode)
		tc.t.Errorf("requesting \"%s\" expected status code: %d but got %d", req.URL.RequestURI(), expectedStatusCode, resp.StatusCode)
	}

	return resp

@@ -400,18 +489,17 @@ func (tc *Tester) AssertResponseCode(req *http.Request, expectedStatusCode int)

// AssertResponse request a URI and assert the status code and the body contains a string
func (tc *Tester) AssertResponse(req *http.Request, expectedStatusCode int, expectedBody string) (*http.Response, string) {
	resp := tc.AssertResponseCode(req, expectedStatusCode)

	defer resp.Body.Close()
	bytes, err := ioutil.ReadAll(resp.Body)
	bytes, err := io.ReadAll(resp.Body)
	if err != nil {
		tc.t.Fatalf("unable to read the response body %s", err)
	}

	body := string(bytes)

	if !strings.Contains(body, expectedBody) {
	if body != expectedBody {
		tc.t.Errorf("requesting \"%s\" expected response body \"%s\" but got \"%s\"", req.RequestURI, expectedBody, body)
	}

@@ -422,7 +510,6 @@ func (tc *Tester) AssertResponse(req *http.Request, expectedStatusCode int, expe

// AssertGetResponse GET a URI and expect a statusCode and body text
func (tc *Tester) AssertGetResponse(requestURI string, expectedStatusCode int, expectedBody string) (*http.Response, string) {
	req, err := http.NewRequest("GET", requestURI, nil)
	if err != nil {
		tc.t.Fatalf("unable to create request %s", err)

@@ -433,7 +520,6 @@ func (tc *Tester) AssertGetResponse(requestURI string, expectedStatusCode int, e

// AssertDeleteResponse request a URI and expect a statusCode and body text
func (tc *Tester) AssertDeleteResponse(requestURI string, expectedStatusCode int, expectedBody string) (*http.Response, string) {
	req, err := http.NewRequest("DELETE", requestURI, nil)
	if err != nil {
		tc.t.Fatalf("unable to create request %s", err)

@@ -444,7 +530,6 @@ func (tc *Tester) AssertDeleteResponse(requestURI string, expectedStatusCode int

// AssertPostResponseBody POST to a URI and assert the response code and body
func (tc *Tester) AssertPostResponseBody(requestURI string, requestHeaders []string, requestBody *bytes.Buffer, expectedStatusCode int, expectedBody string) (*http.Response, string) {
	req, err := http.NewRequest("POST", requestURI, requestBody)
	if err != nil {
		tc.t.Errorf("failed to create request %s", err)

@@ -458,7 +543,6 @@ func (tc *Tester) AssertPostResponseBody(requestURI string, requestHeaders []str

// AssertPutResponseBody PUT to a URI and assert the response code and body
func (tc *Tester) AssertPutResponseBody(requestURI string, requestHeaders []string, requestBody *bytes.Buffer, expectedStatusCode int, expectedBody string) (*http.Response, string) {
	req, err := http.NewRequest("PUT", requestURI, requestBody)
	if err != nil {
		tc.t.Errorf("failed to create request %s", err)

@@ -472,7 +556,6 @@ func (tc *Tester) AssertPutResponseBody(requestURI string, requestHeaders []stri

// AssertPatchResponseBody PATCH to a URI and assert the response code and body
func (tc *Tester) AssertPatchResponseBody(requestURI string, requestHeaders []string, requestBody *bytes.Buffer, expectedStatusCode int, expectedBody string) (*http.Response, string) {
	req, err := http.NewRequest("PATCH", requestURI, requestBody)
	if err != nil {
		tc.t.Errorf("failed to create request %s", err)

@@ -1,6 +1,7 @@
package caddytest

import (
	"net/http"
	"strings"
	"testing"
)

@@ -31,3 +32,97 @@ func TestReplaceCertificatePaths(t *testing.T) {
		t.Error("expected redirect uri to be unchanged")
	}
}

func TestLoadUnorderedJSON(t *testing.T) {
	tester := NewTester(t)
	tester.InitServer(`
	{
		"logging": {
			"logs": {
				"default": {
					"level": "DEBUG",
					"writer": {
						"output": "stdout"
					}
				},
				"sStdOutLogs": {
					"level": "DEBUG",
					"writer": {
						"output": "stdout"
					},
					"include": [
						"http.*",
						"admin.*"
					]
				},
				"sFileLogs": {
					"level": "DEBUG",
					"writer": {
						"output": "stdout"
					},
					"include": [
						"http.*",
						"admin.*"
					]
				}
			}
		},
		"admin": {
			"listen": "localhost:2999"
		},
		"apps": {
			"pki": {
				"certificate_authorities" : {
					"local" : {
						"install_trust": false
					}
				}
			},
			"http": {
				"http_port": 9080,
				"https_port": 9443,
				"servers": {
					"s_server": {
						"listen": [
							":9080"
						],
						"routes": [
							{
								"handle": [
									{
										"handler": "static_response",
										"body": "Hello"
									}
								]
							},
							{
								"match": [
									{
										"host": [
											"localhost",
											"127.0.0.1"
										]
									}
								]
							}
						],
						"logs": {
							"default_logger_name": "sStdOutLogs",
							"logger_names": {
								"localhost": "sStdOutLogs",
								"127.0.0.1": "sFileLogs"
							}
						}
					}
				}
			}
		}
	}
	`, "json")
	req, err := http.NewRequest(http.MethodGet, "http://localhost:9080/", nil)
	if err != nil {
		t.Fail()
		return
	}
	tester.AssertResponseCode(req, 200)
}
caddytest/integration/acme_test.go (new file, 206 lines)
@@ -0,0 +1,206 @@
package integration

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"net"
	"net/http"
	"strings"
	"testing"

	"github.com/caddyserver/caddy/v2"
	"github.com/caddyserver/caddy/v2/caddytest"
	"github.com/mholt/acmez/v2"
	"github.com/mholt/acmez/v2/acme"
	smallstepacme "github.com/smallstep/certificates/acme"
	"go.uber.org/zap"
)

const acmeChallengePort = 9081

// Test the basic functionality of Caddy's ACME server
func TestACMEServerWithDefaults(t *testing.T) {
	ctx := context.Background()
	logger, err := zap.NewDevelopment()
	if err != nil {
		t.Error(err)
		return
	}

	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		admin localhost:2999
		http_port 9080
		https_port 9443
		local_certs
	}
	acme.localhost {
		acme_server
	}
	`, "caddyfile")

	client := acmez.Client{
		Client: &acme.Client{
			Directory:  "https://acme.localhost:9443/acme/local/directory",
			HTTPClient: tester.Client,
			Logger:     logger,
		},
		ChallengeSolvers: map[string]acmez.Solver{
			acme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},
		},
	}

	accountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating account key: %v", err)
	}
	account := acme.Account{
		Contact:              []string{"mailto:you@example.com"},
		TermsOfServiceAgreed: true,
		PrivateKey:           accountPrivateKey,
	}
	account, err = client.NewAccount(ctx, account)
	if err != nil {
		t.Errorf("new account: %v", err)
		return
	}

	// Every certificate needs a key.
	certPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating certificate key: %v", err)
		return
	}

	certs, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{"localhost"})
	if err != nil {
		t.Errorf("obtaining certificate: %v", err)
		return
	}

	// ACME servers should usually give you the entire certificate chain
	// in PEM format, and sometimes even alternate chains! It's up to you
	// which one(s) to store and use, but whatever you do, be sure to
	// store the certificate and key somewhere safe and secure, i.e. don't
	// lose them!
	for _, cert := range certs {
		t.Logf("Certificate %q:\n%s\n\n", cert.URL, cert.ChainPEM)
	}
}

func TestACMEServerWithMismatchedChallenges(t *testing.T) {
	ctx := context.Background()
	logger := caddy.Log().Named("acmez")

	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		admin localhost:2999
		http_port 9080
		https_port 9443
		local_certs
	}
	acme.localhost {
		acme_server {
			challenges tls-alpn-01
		}
	}
	`, "caddyfile")

	client := acmez.Client{
		Client: &acme.Client{
			Directory:  "https://acme.localhost:9443/acme/local/directory",
			HTTPClient: tester.Client,
			Logger:     logger,
		},
		ChallengeSolvers: map[string]acmez.Solver{
			acme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},
		},
	}

	accountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating account key: %v", err)
	}
	account := acme.Account{
		Contact:              []string{"mailto:you@example.com"},
		TermsOfServiceAgreed: true,
		PrivateKey:           accountPrivateKey,
	}
	account, err = client.NewAccount(ctx, account)
	if err != nil {
		t.Errorf("new account: %v", err)
		return
	}

	// Every certificate needs a key.
	certPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating certificate key: %v", err)
		return
	}

	certs, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{"localhost"})
	if len(certs) > 0 {
		t.Errorf("expected '0' certificates, but received '%d'", len(certs))
	}
	if err == nil {
		t.Error("expected errors, but received none")
	}
	const expectedErrMsg = "no solvers available for remaining challenges (configured=[http-01] offered=[tls-alpn-01] remaining=[tls-alpn-01])"
	if !strings.Contains(err.Error(), expectedErrMsg) {
		t.Errorf(`received error message does not match expectation: expected="%s" received="%s"`, expectedErrMsg, err.Error())
	}
}

// naiveHTTPSolver is a no-op acmez.Solver for example purposes only.
type naiveHTTPSolver struct {
	srv    *http.Server
	logger *zap.Logger
}

func (s *naiveHTTPSolver) Present(ctx context.Context, challenge acme.Challenge) error {
	smallstepacme.InsecurePortHTTP01 = acmeChallengePort
	s.srv = &http.Server{
		Addr: fmt.Sprintf(":%d", acmeChallengePort),
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			host, _, err := net.SplitHostPort(r.Host)
			if err != nil {
				host = r.Host
			}
			s.logger.Info("received request on challenge server", zap.String("path", r.URL.Path))
			if r.Method == "GET" && r.URL.Path == challenge.HTTP01ResourcePath() && strings.EqualFold(host, challenge.Identifier.Value) {
				w.Header().Add("Content-Type", "text/plain")
				w.Write([]byte(challenge.KeyAuthorization))
				r.Close = true
				s.logger.Info("served key authentication",
					zap.String("identifier", challenge.Identifier.Value),
					zap.String("challenge", "http-01"),
					zap.String("remote", r.RemoteAddr),
				)
			}
		}),
	}
	l, err := net.Listen("tcp", fmt.Sprintf(":%d", acmeChallengePort))
	if err != nil {
		return err
	}
	s.logger.Info("present challenge", zap.Any("challenge", challenge))
	go s.srv.Serve(l)
	return nil
}

func (s naiveHTTPSolver) CleanUp(ctx context.Context, challenge acme.Challenge) error {
	smallstepacme.InsecurePortHTTP01 = 0
	s.logger.Info("cleanup", zap.Any("challenge", challenge))
	if s.srv != nil {
		s.srv.Close()
	}
	return nil
}
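The test's comment notes that ACME servers usually return the entire certificate chain in PEM format (the `ChainPEM` field logged above). As a hedged sketch that is not part of this PR, splitting such a chain into parsed certificates with the standard library could look like this (`splitChainPEM` is a hypothetical helper name):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// splitChainPEM splits a PEM-encoded certificate chain (such as a ChainPEM
// value) into parsed x509 certificates, leaf first. Hypothetical helper,
// not from the Caddy codebase.
func splitChainPEM(chainPEM []byte) ([]*x509.Certificate, error) {
	var certs []*x509.Certificate
	for len(chainPEM) > 0 {
		var block *pem.Block
		block, chainPEM = pem.Decode(chainPEM)
		if block == nil {
			break // no more PEM blocks
		}
		if block.Type != "CERTIFICATE" {
			continue // skip non-certificate blocks
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return nil, err
		}
		certs = append(certs, cert)
	}
	if len(certs) == 0 {
		return nil, fmt.Errorf("no certificates found in chain")
	}
	return certs, nil
}

func main() {
	// Build a throwaway self-signed certificate so the helper has input.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	chain := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})

	certs, err := splitChainPEM(chain)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(certs), certs[0].Subject.CommonName)
}
```

In real use you would feed this the chain bytes received from the ACME client rather than a self-signed test certificate.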
caddytest/integration/acmeserver_test.go (new file, 204 lines)
@@ -0,0 +1,204 @@
package integration

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"strings"
	"testing"

	"github.com/caddyserver/caddy/v2/caddytest"
	"github.com/mholt/acmez/v2"
	"github.com/mholt/acmez/v2/acme"
	"go.uber.org/zap"
)

func TestACMEServerDirectory(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		local_certs
		admin localhost:2999
		http_port 9080
		https_port 9443
		pki {
			ca local {
				name "Caddy Local Authority"
			}
		}
	}
	acme.localhost:9443 {
		acme_server
	}
	`, "caddyfile")
	tester.AssertGetResponse(
		"https://acme.localhost:9443/acme/local/directory",
		200,
		`{"newNonce":"https://acme.localhost:9443/acme/local/new-nonce","newAccount":"https://acme.localhost:9443/acme/local/new-account","newOrder":"https://acme.localhost:9443/acme/local/new-order","revokeCert":"https://acme.localhost:9443/acme/local/revoke-cert","keyChange":"https://acme.localhost:9443/acme/local/key-change"}
`)
}

func TestACMEServerAllowPolicy(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		local_certs
		admin localhost:2999
		http_port 9080
		https_port 9443
		pki {
			ca local {
				name "Caddy Local Authority"
			}
		}
	}
	acme.localhost {
		acme_server {
			challenges http-01
			allow {
				domains localhost
			}
		}
	}
	`, "caddyfile")

	ctx := context.Background()
	logger, err := zap.NewDevelopment()
	if err != nil {
		t.Error(err)
		return
	}

	client := acmez.Client{
		Client: &acme.Client{
			Directory:  "https://acme.localhost:9443/acme/local/directory",
			HTTPClient: tester.Client,
			Logger:     logger,
		},
		ChallengeSolvers: map[string]acmez.Solver{
			acme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},
		},
	}

	accountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating account key: %v", err)
	}
	account := acme.Account{
		Contact:              []string{"mailto:you@example.com"},
		TermsOfServiceAgreed: true,
		PrivateKey:           accountPrivateKey,
	}
	account, err = client.NewAccount(ctx, account)
	if err != nil {
		t.Errorf("new account: %v", err)
		return
	}

	// Every certificate needs a key.
	certPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating certificate key: %v", err)
		return
	}
	{
		certs, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{"localhost"})
		if err != nil {
			t.Errorf("obtaining certificate for allowed domain: %v", err)
			return
		}

		// ACME servers should usually give you the entire certificate chain
		// in PEM format, and sometimes even alternate chains! It's up to you
		// which one(s) to store and use, but whatever you do, be sure to
		// store the certificate and key somewhere safe and secure, i.e. don't
		// lose them!
		for _, cert := range certs {
			t.Logf("Certificate %q:\n%s\n\n", cert.URL, cert.ChainPEM)
		}
	}
	{
		_, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{"not-matching.localhost"})
		if err == nil {
			t.Errorf("obtaining certificate for 'not-matching.localhost' domain")
		} else if err != nil && !strings.Contains(err.Error(), "urn:ietf:params:acme:error:rejectedIdentifier") {
			t.Logf("unexpected error: %v", err)
		}
	}
}

func TestACMEServerDenyPolicy(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		local_certs
		admin localhost:2999
		http_port 9080
		https_port 9443
		pki {
			ca local {
				name "Caddy Local Authority"
			}
		}
	}
	acme.localhost {
		acme_server {
			deny {
				domains deny.localhost
			}
		}
	}
	`, "caddyfile")

	ctx := context.Background()
	logger, err := zap.NewDevelopment()
	if err != nil {
		t.Error(err)
		return
	}

	client := acmez.Client{
		Client: &acme.Client{
			Directory:  "https://acme.localhost:9443/acme/local/directory",
			HTTPClient: tester.Client,
			Logger:     logger,
		},
		ChallengeSolvers: map[string]acmez.Solver{
			acme.ChallengeTypeHTTP01: &naiveHTTPSolver{logger: logger},
		},
	}

	accountPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating account key: %v", err)
	}
	account := acme.Account{
		Contact:              []string{"mailto:you@example.com"},
		TermsOfServiceAgreed: true,
		PrivateKey:           accountPrivateKey,
	}
	account, err = client.NewAccount(ctx, account)
	if err != nil {
		t.Errorf("new account: %v", err)
		return
	}

	// Every certificate needs a key.
	certPrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		t.Errorf("generating certificate key: %v", err)
		return
	}
	{
		_, err := client.ObtainCertificateForSANs(ctx, account, certPrivateKey, []string{"deny.localhost"})
		if err == nil {
			t.Errorf("obtaining certificate for 'deny.localhost' domain")
		} else if err != nil && !strings.Contains(err.Error(), "urn:ietf:params:acme:error:rejectedIdentifier") {
			t.Logf("unexpected error: %v", err)
		}
	}
}
caddytest/integration/autohttps_test.go (new file, 145 lines)
@@ -0,0 +1,145 @@
package integration

import (
	"net/http"
	"testing"

	"github.com/caddyserver/caddy/v2/caddytest"
)

func TestAutoHTTPtoHTTPSRedirectsImplicitPort(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		admin localhost:2999
		skip_install_trust
		http_port 9080
		https_port 9443
	}
	localhost
	respond "Yahaha! You found me!"
	`, "caddyfile")

	tester.AssertRedirect("http://localhost:9080/", "https://localhost/", http.StatusPermanentRedirect)
}

func TestAutoHTTPtoHTTPSRedirectsExplicitPortSameAsHTTPSPort(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		admin localhost:2999
		http_port 9080
		https_port 9443
	}
	localhost:9443
	respond "Yahaha! You found me!"
	`, "caddyfile")

	tester.AssertRedirect("http://localhost:9080/", "https://localhost/", http.StatusPermanentRedirect)
}

func TestAutoHTTPtoHTTPSRedirectsExplicitPortDifferentFromHTTPSPort(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		admin localhost:2999
		http_port 9080
		https_port 9443
	}
	localhost:1234
	respond "Yahaha! You found me!"
	`, "caddyfile")

	tester.AssertRedirect("http://localhost:9080/", "https://localhost:1234/", http.StatusPermanentRedirect)
}

func TestAutoHTTPRedirectsWithHTTPListenerFirstInAddresses(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		"admin": {
			"listen": "localhost:2999"
		},
		"apps": {
			"http": {
				"http_port": 9080,
				"https_port": 9443,
				"servers": {
					"ingress_server": {
						"listen": [
							":9080",
							":9443"
						],
						"routes": [
							{
								"match": [
									{
										"host": ["localhost"]
									}
								]
							}
						]
					}
				}
			},
			"pki": {
				"certificate_authorities": {
					"local": {
						"install_trust": false
					}
				}
			}
		}
	}
	`, "json")
	tester.AssertRedirect("http://localhost:9080/", "https://localhost/", http.StatusPermanentRedirect)
}

func TestAutoHTTPRedirectsInsertedBeforeUserDefinedCatchAll(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		admin localhost:2999
		http_port 9080
		https_port 9443
		local_certs
	}
	http://:9080 {
		respond "Foo"
	}
	http://baz.localhost:9080 {
		respond "Baz"
	}
	bar.localhost {
		respond "Bar"
	}
	`, "caddyfile")
	tester.AssertRedirect("http://bar.localhost:9080/", "https://bar.localhost/", http.StatusPermanentRedirect)
	tester.AssertGetResponse("http://foo.localhost:9080/", 200, "Foo")
	tester.AssertGetResponse("http://baz.localhost:9080/", 200, "Baz")
}

func TestAutoHTTPRedirectsInsertedBeforeUserDefinedCatchAllWithNoExplicitHTTPSite(t *testing.T) {
	tester := caddytest.NewTester(t)
	tester.InitServer(`
	{
		skip_install_trust
		admin localhost:2999
		http_port 9080
		https_port 9443
		local_certs
	}
	http://:9080 {
		respond "Foo"
	}
	bar.localhost {
		respond "Bar"
	}
	`, "caddyfile")
	tester.AssertRedirect("http://bar.localhost:9080/", "https://bar.localhost/", http.StatusPermanentRedirect)
	tester.AssertGetResponse("http://foo.localhost:9080/", 200, "Foo")
	tester.AssertGetResponse("http://baz.localhost:9080/", 200, "Foo")
}
@@ -0,0 +1,65 @@
{
	pki {
		ca custom-ca {
			name "Custom CA"
		}
	}
}

acme.example.com {
	acme_server {
		ca custom-ca
		challenges dns-01
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"acme.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"ca": "custom-ca",
													"challenges": [
														"dns-01"
													],
													"handler": "acme_server"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					]
				}
			}
		},
		"pki": {
			"certificate_authorities": {
				"custom-ca": {
					"name": "Custom CA"
				}
			}
		}
	}
}
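Fixtures like the one above pair a Caddyfile (before the `----------` separator) with the JSON config it should adapt to (after it). A minimal sketch of how a harness might split such a fixture, mirroring but not copied from Caddy's actual test harness (`splitFixture` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strings"
)

// splitFixture separates a caddyfile_adapt fixture into the Caddyfile input
// and the expected JSON adaptation. Hypothetical helper for illustration.
func splitFixture(fixture string) (caddyfile, expectedJSON string, err error) {
	parts := strings.SplitN(fixture, "----------", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("malformed fixture: missing separator")
	}
	return strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1]), nil
}

func main() {
	fixture := "localhost\n----------\n{\"apps\":{}}"
	cf, js, err := splitFixture(fixture)
	if err != nil {
		panic(err)
	}
	fmt.Println(cf)
	fmt.Println(js)
}
```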
@@ -0,0 +1,62 @@
{
	pki {
		ca custom-ca {
			name "Custom CA"
		}
	}
}

acme.example.com {
	acme_server {
		ca custom-ca
		challenges
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"acme.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"ca": "custom-ca",
													"handler": "acme_server"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					]
				}
			}
		},
		"pki": {
			"certificate_authorities": {
				"custom-ca": {
					"name": "Custom CA"
				}
			}
		}
	}
}
@@ -0,0 +1,108 @@
{
	pki {
		ca internal {
			name "Internal"
			root_cn "Internal Root Cert"
			intermediate_cn "Internal Intermediate Cert"
		}
		ca internal-long-lived {
			name "Long-lived"
			root_cn "Internal Root Cert 2"
			intermediate_cn "Internal Intermediate Cert 2"
		}
	}
}

acme-internal.example.com {
	acme_server {
		ca internal
	}
}

acme-long-lived.example.com {
	acme_server {
		ca internal-long-lived
		lifetime 7d
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"acme-long-lived.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"ca": "internal-long-lived",
													"handler": "acme_server",
													"lifetime": 604800000000000
												}
											]
										}
									]
								}
							],
							"terminal": true
						},
						{
							"match": [
								{
									"host": [
										"acme-internal.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"ca": "internal",
													"handler": "acme_server"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					]
				}
			}
		},
		"pki": {
			"certificate_authorities": {
				"internal": {
					"name": "Internal",
					"root_common_name": "Internal Root Cert",
					"intermediate_common_name": "Internal Intermediate Cert"
				},
				"internal-long-lived": {
					"name": "Long-lived",
					"root_common_name": "Internal Root Cert 2",
					"intermediate_common_name": "Internal Intermediate Cert 2"
				}
			}
		}
	}
}
@@ -0,0 +1,66 @@
{
	pki {
		ca custom-ca {
			name "Custom CA"
		}
	}
}

acme.example.com {
	acme_server {
		ca custom-ca
		challenges dns-01 http-01
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"acme.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"ca": "custom-ca",
													"challenges": [
														"dns-01",
														"http-01"
													],
													"handler": "acme_server"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					]
				}
			}
		},
		"pki": {
			"certificate_authorities": {
				"custom-ca": {
					"name": "Custom CA"
				}
			}
		}
	}
}
@@ -0,0 +1,67 @@
{
	pki {
		ca internal {
			name "Internal"
			root_cn "Internal Root Cert"
			intermediate_cn "Internal Intermediate Cert"
		}
	}
}
acme.example.com {
	acme_server {
		ca internal
		sign_with_root
	}
}

----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"acme.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"ca": "internal",
													"handler": "acme_server",
													"sign_with_root": true
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					]
				}
			}
		},
		"pki": {
			"certificate_authorities": {
				"internal": {
					"name": "Internal",
					"root_common_name": "Internal Root Cert",
					"intermediate_common_name": "Internal Intermediate Cert"
				}
			}
		}
	}
}
@@ -0,0 +1,34 @@
{
	auto_https disable_redirects
}

localhost
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"localhost"
									]
								}
							],
							"terminal": true
						}
					],
					"automatic_https": {
						"disable_redirects": true
					}
				}
			}
		}
	}
}
@@ -0,0 +1,34 @@
{
	auto_https ignore_loaded_certs
}

localhost
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"localhost"
									]
								}
							],
							"terminal": true
						}
					],
					"automatic_https": {
						"ignore_loaded_certificates": true
					}
				}
			}
		}
	}
}
@@ -0,0 +1,37 @@
{
	auto_https off
}

localhost
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"localhost"
									]
								}
							],
							"terminal": true
						}
					],
					"tls_connection_policies": [
						{}
					],
					"automatic_https": {
						"disable": true
					}
				}
			}
		}
	}
}
@@ -0,0 +1,109 @@
{
	auto_https prefer_wildcard
}

*.example.com {
	tls {
		dns mock
	}
	respond "fallback"
}

foo.example.com {
	respond "foo"
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"foo.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "foo",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						},
						{
							"match": [
								{
									"host": [
										"*.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "fallback",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					],
					"automatic_https": {
						"skip_certificates": [
							"foo.example.com"
						],
						"prefer_wildcard": true
					}
				}
			}
		},
		"tls": {
			"automation": {
				"policies": [
					{
						"subjects": [
							"*.example.com"
						],
						"issuers": [
							{
								"challenges": {
									"dns": {
										"provider": {
											"name": "mock"
										}
									}
								},
								"module": "acme"
							}
						]
					}
				]
			}
		}
	}
}
@@ -0,0 +1,268 @@
{
	auto_https prefer_wildcard
}

# Covers two domains
*.one.example.com {
	tls {
		dns mock
	}
	respond "one fallback"
}

# Is covered, should not get its own AP
foo.one.example.com {
	respond "foo one"
}

# This one has its own tls config so it doesn't get covered (escape hatch)
bar.one.example.com {
	respond "bar one"
	tls bar@bar.com
}

# Covers nothing but AP gets consolidated with the first
*.two.example.com {
	tls {
		dns mock
	}
	respond "two fallback"
}

# Is HTTP so it should not cover
http://*.three.example.com {
	respond "three fallback"
}

# Has no wildcard coverage so it gets an AP
foo.three.example.com {
	respond "foo three"
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"foo.three.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "foo three",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						},
						{
							"match": [
								{
									"host": [
										"foo.one.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "foo one",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						},
						{
							"match": [
								{
									"host": [
										"bar.one.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "bar one",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						},
						{
							"match": [
								{
									"host": [
										"*.one.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "one fallback",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						},
						{
							"match": [
								{
									"host": [
										"*.two.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "two fallback",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					],
					"automatic_https": {
						"skip_certificates": [
							"foo.one.example.com",
							"bar.one.example.com"
						],
						"prefer_wildcard": true
					}
				},
				"srv1": {
					"listen": [
						":80"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"*.three.example.com"
									]
								}
							],
							"handle": [
								{
									"handler": "subroute",
									"routes": [
										{
											"handle": [
												{
													"body": "three fallback",
													"handler": "static_response"
												}
											]
										}
									]
								}
							],
							"terminal": true
						}
					],
					"automatic_https": {
						"prefer_wildcard": true
					}
				}
			}
		},
		"tls": {
			"automation": {
				"policies": [
					{
						"subjects": [
							"foo.three.example.com"
						]
					},
					{
						"subjects": [
							"bar.one.example.com"
						],
						"issuers": [
							{
								"email": "bar@bar.com",
								"module": "acme"
							},
							{
								"ca": "https://acme.zerossl.com/v2/DV90",
								"email": "bar@bar.com",
								"module": "acme"
							}
						]
					},
					{
						"subjects": [
							"*.one.example.com",
							"*.two.example.com"
						],
						"issuers": [
							{
								"challenges": {
									"dns": {
										"provider": {
											"name": "mock"
										}
									}
								},
								"module": "acme"
							}
						]
					}
				]
			}
		}
	}
}
@ -0,0 +1,142 @@
|
|||
{
|
||||
auto_https disable_redirects
|
||||
admin off
|
||||
}
|
||||
|
||||
http://localhost {
|
||||
bind fd/{env.CADDY_HTTP_FD} {
|
||||
protocols h1
|
||||
}
|
||||
log
|
||||
respond "Hello, HTTP!"
|
||||
}
|
||||
|
||||
https://localhost {
|
||||
bind fd/{env.CADDY_HTTPS_FD} {
|
||||
protocols h1 h2
|
||||
}
|
||||
bind fdgram/{env.CADDY_HTTP3_FD} {
|
||||
protocols h3
|
||||
}
|
||||
log
|
||||
respond "Hello, HTTPS!"
|
||||
}
|
||||
----------
|
||||
{
|
||||
"admin": {
|
||||
"disabled": true
|
||||
},
|
||||
"apps": {
|
||||
"http": {
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
"fd/{env.CADDY_HTTPS_FD}",
|
||||
"fdgram/{env.CADDY_HTTP3_FD}"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Hello, HTTPS!",
|
||||
"handler": "static_response"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"automatic_https": {
|
||||
"disable_redirects": true
|
||||
},
|
||||
"logs": {
|
||||
"logger_names": {
|
||||
"localhost": [
|
||||
""
|
||||
]
|
||||
}
|
||||
},
|
||||
"listen_protocols": [
|
||||
[
|
||||
"h1",
|
||||
"h2"
|
||||
],
|
||||
[
|
||||
"h3"
|
||||
]
|
||||
]
|
||||
},
|
||||
"srv1": {
|
||||
"automatic_https": {
|
||||
"disable_redirects": true
|
||||
}
|
||||
},
|
||||
"srv2": {
|
||||
"listen": [
|
||||
"fd/{env.CADDY_HTTP_FD}"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Hello, HTTP!",
|
||||
"handler": "static_response"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"automatic_https": {
|
||||
"disable_redirects": true,
|
||||
"skip": [
|
||||
"localhost"
|
||||
]
|
||||
},
|
||||
"logs": {
|
||||
"logger_names": {
|
||||
"localhost": [
|
||||
""
|
||||
]
|
||||
}
|
||||
},
|
||||
"listen_protocols": [
|
||||
[
|
||||
"h1"
|
||||
]
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@@ -0,0 +1,29 @@
example.com {
	bind tcp6/[::]
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						"tcp6/[::]:443"
					],
					"routes": [
						{
							"match": [
								{
									"host": [
										"example.com"
									]
								}
							],
							"terminal": true
						}
					]
				}
			}
		}
	}
}
@@ -0,0 +1,37 @@
:8443 {
	tls internal {
		on_demand
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":8443"
					],
					"tls_connection_policies": [
						{}
					]
				}
			}
		},
		"tls": {
			"automation": {
				"policies": [
					{
						"issuers": [
							{
								"module": "internal"
							}
						],
						"on_demand": true
					}
				]
			}
		}
	}
}

@ -0,0 +1,100 @@
|
|||
:80
|
||||
|
||||
# All the options
|
||||
encode gzip zstd {
|
||||
minimum_length 256
|
||||
match {
|
||||
status 2xx 4xx 500
|
||||
header Content-Type text/*
|
||||
header Content-Type application/json*
|
||||
header Content-Type application/javascript*
|
||||
header Content-Type application/xhtml+xml*
|
||||
header Content-Type application/atom+xml*
|
||||
header Content-Type application/rss+xml*
|
||||
header Content-Type application/wasm*
|
||||
header Content-Type image/svg+xml*
|
||||
}
|
||||
}
|
||||
|
||||
# Long way with a block for each encoding
|
||||
encode {
|
||||
zstd
|
||||
gzip 5
|
||||
}
|
||||
|
||||
encode
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":80"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"encodings": {
|
||||
"gzip": {},
|
||||
"zstd": {}
|
||||
},
|
||||
"handler": "encode",
|
||||
"match": {
|
||||
"headers": {
|
||||
"Content-Type": [
|
||||
"text/*",
|
||||
"application/json*",
|
||||
"application/javascript*",
|
||||
"application/xhtml+xml*",
|
||||
"application/atom+xml*",
|
||||
"application/rss+xml*",
|
||||
"application/wasm*",
|
||||
"image/svg+xml*"
|
||||
]
|
||||
},
|
||||
"status_code": [
|
||||
2,
|
||||
4,
|
||||
500
|
||||
]
|
||||
},
|
||||
"minimum_length": 256,
|
||||
"prefer": [
|
||||
"gzip",
|
||||
"zstd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"encodings": {
|
||||
"gzip": {
|
||||
"level": 5
|
||||
},
|
||||
"zstd": {}
|
||||
},
|
||||
"handler": "encode",
|
||||
"prefer": [
|
||||
"zstd",
|
||||
"gzip"
|
||||
]
|
||||
},
|
||||
{
|
||||
"encodings": {
|
||||
"gzip": {},
|
||||
"zstd": {}
|
||||
},
|
||||
"handler": "encode",
|
||||
"prefer": [
|
||||
"zstd",
|
||||
"gzip"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
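The `encode` fixtures above show that the order encodings are listed in becomes the `prefer` array in the adapted JSON (`gzip zstd` inline vs. `zstd` then `gzip` in block form). A minimal sketch of what such a preference list implies at request time — pick the first server-preferred encoding the client also accepts — could look like this in Go. This is an illustrative assumption, not Caddy's actual `encode` handler, and it deliberately ignores `Accept-Encoding` q-values, which a real implementation must honor:

```go
package main

import (
	"fmt"
	"strings"
)

// pickEncoding returns the first server-preferred encoding that also appears
// in the client's Accept-Encoding header, or "" for identity.
// Simplified sketch: q-values and wildcards are not handled.
func pickEncoding(prefer []string, acceptEncoding string) string {
	accepted := map[string]bool{}
	for _, tok := range strings.Split(acceptEncoding, ",") {
		// strip any ";q=..." parameter and surrounding whitespace
		name := strings.TrimSpace(strings.SplitN(tok, ";", 2)[0])
		if name != "" {
			accepted[name] = true
		}
	}
	for _, enc := range prefer {
		if accepted[enc] {
			return enc
		}
	}
	return ""
}

func main() {
	fmt.Println(pickEncoding([]string{"zstd", "gzip"}, "gzip, deflate, br")) // gzip
	fmt.Println(pickEncoding([]string{"zstd", "gzip"}, "zstd, gzip"))        // zstd
}
```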
@ -0,0 +1,138 @@
|
|||
example.com {
|
||||
root * /srv
|
||||
|
||||
# Trigger errors for certain paths
|
||||
error /private* "Unauthorized" 403
|
||||
error /hidden* "Not found" 404
|
||||
|
||||
# Handle the error by serving an HTML page
|
||||
handle_errors {
|
||||
rewrite * /{http.error.status_code}.html
|
||||
file_server
|
||||
}
|
||||
|
||||
file_server
|
||||
}
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":443"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"example.com"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 403
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Not found",
|
||||
"handler": "error",
|
||||
"status_code": 404
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/hidden*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "file_server",
|
||||
"hide": [
|
||||
"./Caddyfile"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"errors": {
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"example.com"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"group": "group0",
|
||||
"handle": [
|
||||
{
|
||||
"handler": "rewrite",
|
||||
"uri": "/{http.error.status_code}.html"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "file_server",
|
||||
"hide": [
|
||||
"./Caddyfile"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,245 @@
|
|||
foo.localhost {
|
||||
root * /srv
|
||||
error /private* "Unauthorized" 410
|
||||
error /fivehundred* "Internal Server Error" 500
|
||||
|
||||
handle_errors 5xx {
|
||||
respond "Error In range [500 .. 599]"
|
||||
}
|
||||
handle_errors 410 {
|
||||
respond "404 or 410 error"
|
||||
}
|
||||
}
|
||||
|
||||
bar.localhost {
|
||||
root * /srv
|
||||
error /private* "Unauthorized" 410
|
||||
error /fivehundred* "Internal Server Error" 500
|
||||
|
||||
handle_errors 5xx {
|
||||
respond "Error In range [500 .. 599] from second site"
|
||||
}
|
||||
handle_errors 410 {
|
||||
respond "404 or 410 error from second site"
|
||||
}
|
||||
}
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":443"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"foo.localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Internal Server Error",
|
||||
"handler": "error",
|
||||
"status_code": 500
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/fivehundred*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 410
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
},
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"bar.localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Internal Server Error",
|
||||
"handler": "error",
|
||||
"status_code": 500
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/fivehundred*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 410
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"errors": {
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"foo.localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "404 or 410 error",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} in [410]"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Error In range [500 .. 599]",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} \u003e= 500 \u0026\u0026 {http.error.status_code} \u003c= 599"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
},
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"bar.localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "404 or 410 error from second site",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} in [410]"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Error In range [500 .. 599] from second site",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} \u003e= 500 \u0026\u0026 {http.error.status_code} \u003c= 599"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,120 @@
|
|||
{
|
||||
http_port 3010
|
||||
}
|
||||
localhost:3010 {
|
||||
root * /srv
|
||||
error /private* "Unauthorized" 410
|
||||
error /hidden* "Not found" 404
|
||||
|
||||
handle_errors 4xx {
|
||||
respond "Error in the [400 .. 499] range"
|
||||
}
|
||||
}
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"http_port": 3010,
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":3010"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 410
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Not found",
|
||||
"handler": "error",
|
||||
"status_code": 404
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/hidden*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"errors": {
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Error in the [400 .. 499] range",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} \u003e= 400 \u0026\u0026 {http.error.status_code} \u003c= 499"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,153 @@
|
|||
{
|
||||
http_port 2099
|
||||
}
|
||||
localhost:2099 {
|
||||
root * /srv
|
||||
error /private* "Unauthorized" 410
|
||||
error /threehundred* "Moved Permanently" 301
|
||||
error /internalerr* "Internal Server Error" 500
|
||||
|
||||
handle_errors 500 3xx {
|
||||
respond "Error code is equal to 500 or in the [300..399] range"
|
||||
}
|
||||
handle_errors 4xx {
|
||||
respond "Error in the [400 .. 499] range"
|
||||
}
|
||||
}
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"http_port": 2099,
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":2099"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Moved Permanently",
|
||||
"handler": "error",
|
||||
"status_code": 301
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/threehundred*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Internal Server Error",
|
||||
"handler": "error",
|
||||
"status_code": 500
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/internalerr*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 410
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"errors": {
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Error in the [400 .. 499] range",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} \u003e= 400 \u0026\u0026 {http.error.status_code} \u003c= 499"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Error code is equal to 500 or in the [300..399] range",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} \u003e= 300 \u0026\u0026 {http.error.status_code} \u003c= 399 || {http.error.status_code} in [500]"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,120 @@
|
|||
{
|
||||
http_port 3010
|
||||
}
|
||||
localhost:3010 {
|
||||
root * /srv
|
||||
error /private* "Unauthorized" 410
|
||||
error /hidden* "Not found" 404
|
||||
|
||||
handle_errors 404 410 {
|
||||
respond "404 or 410 error"
|
||||
}
|
||||
}
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"http_port": 3010,
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":3010"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 410
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Not found",
|
||||
"handler": "error",
|
||||
"status_code": 404
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/hidden*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"errors": {
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "404 or 410 error",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} in [404, 410]"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
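Across the `handle_errors` fixtures above, class tokens like `4xx` adapt to a range expression (`{http.error.status_code} >= 400 && {http.error.status_code} <= 499`) while explicit codes adapt to a membership expression (`{http.error.status_code} in [404, 410]`), joined with `||` when mixed. A small Go sketch of that expansion, matching the expression strings in the JSON output (this is an illustration of the mapping, not Caddy's actual adapter code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// statusCodeExpr builds a CEL-style matcher expression from handle_errors
// arguments such as "4xx", "404", "410", mirroring the "expression" strings
// seen in the adapted JSON above.
func statusCodeExpr(args []string) string {
	var ranges, codes []string
	for _, a := range args {
		if strings.HasSuffix(a, "xx") {
			// class token: expand NXX to the [N00 .. N99] range
			class, _ := strconv.Atoi(strings.TrimSuffix(a, "xx"))
			ranges = append(ranges, fmt.Sprintf(
				"{http.error.status_code} >= %d && {http.error.status_code} <= %d",
				class*100, class*100+99))
		} else {
			codes = append(codes, a)
		}
	}
	parts := ranges
	if len(codes) > 0 {
		parts = append(parts, fmt.Sprintf(
			"{http.error.status_code} in [%s]", strings.Join(codes, ", ")))
	}
	return strings.Join(parts, " || ")
}

func main() {
	fmt.Println(statusCodeExpr([]string{"404", "410"})) // {http.error.status_code} in [404, 410]
	fmt.Println(statusCodeExpr([]string{"4xx"}))
	fmt.Println(statusCodeExpr([]string{"500", "3xx"}))
}
```

The third call reproduces the mixed `handle_errors 500 3xx` case from the earlier fixture, where the range clause precedes the membership clause.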
148
caddytest/integration/caddyfile_adapt/error_sort.caddyfiletest
Normal file
|
@ -0,0 +1,148 @@
|
|||
{
|
||||
http_port 2099
|
||||
}
|
||||
localhost:2099 {
|
||||
root * /srv
|
||||
error /private* "Unauthorized" 410
|
||||
error /hidden* "Not found" 404
|
||||
error /internalerr* "Internal Server Error" 500
|
||||
|
||||
handle_errors {
|
||||
respond "Fallback route: code outside the [400..499] range"
|
||||
}
|
||||
handle_errors 4xx {
|
||||
respond "Error in the [400 .. 499] range"
|
||||
}
|
||||
}
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"http_port": 2099,
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":2099"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"handler": "vars",
|
||||
"root": "/srv"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Internal Server Error",
|
||||
"handler": "error",
|
||||
"status_code": 500
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/internalerr*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Unauthorized",
|
||||
"handler": "error",
|
||||
"status_code": 410
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/private*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"error": "Not found",
|
||||
"handler": "error",
|
||||
"status_code": 404
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"path": [
|
||||
"/hidden*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
],
|
||||
"errors": {
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"localhost"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Error in the [400 .. 499] range",
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} \u003e= 400 \u0026\u0026 {http.error.status_code} \u003c= 499"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"body": "Fallback route: code outside the [400..499] range",
|
||||
"handler": "static_response"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -0,0 +1,162 @@
|
|||
(snippet) {
|
||||
@g `{http.error.status_code} == 404`
|
||||
}
|
||||
|
||||
example.com
|
||||
|
||||
@a expression {http.error.status_code} == 400
|
||||
abort @a
|
||||
|
||||
@b expression {http.error.status_code} == "401"
|
||||
abort @b
|
||||
|
||||
@c expression {http.error.status_code} == `402`
|
||||
abort @c
|
||||
|
||||
@d expression "{http.error.status_code} == 403"
|
||||
abort @d
|
||||
|
||||
@e expression `{http.error.status_code} == 404`
|
||||
abort @e
|
||||
|
||||
@f `{http.error.status_code} == 404`
|
||||
abort @f
|
||||
|
||||
import snippet
|
||||
abort @g
|
||||
----------
|
||||
{
|
||||
"apps": {
|
||||
"http": {
|
||||
"servers": {
|
||||
"srv0": {
|
||||
"listen": [
|
||||
":443"
|
||||
],
|
||||
"routes": [
|
||||
{
|
||||
"match": [
|
||||
{
|
||||
"host": [
|
||||
"example.com"
|
||||
]
|
||||
}
|
||||
],
|
||||
"handle": [
|
||||
{
|
||||
"handler": "subroute",
|
||||
"routes": [
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} == 400"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} == \"401\""
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": "{http.error.status_code} == `402`"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": {
|
||||
"expr": "{http.error.status_code} == 403",
|
||||
"name": "d"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": {
|
||||
"expr": "{http.error.status_code} == 404",
|
||||
"name": "e"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": {
|
||||
"expr": "{http.error.status_code} == 404",
|
||||
"name": "f"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"handle": [
|
||||
{
|
||||
"abort": true,
|
||||
"handler": "static_response"
|
||||
}
|
||||
],
|
||||
"match": [
|
||||
{
|
||||
"expression": {
|
||||
"expr": "{http.error.status_code} == 404",
|
||||
"name": "g"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"terminal": true
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@@ -0,0 +1,32 @@
:80

file_server {
	disable_canonical_uris
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":80"
					],
					"routes": [
						{
							"handle": [
								{
									"canonical_uris": false,
									"handler": "file_server",
									"hide": [
										"./Caddyfile"
									]
								}
							]
						}
					]
				}
			}
		}
	}
}
@@ -0,0 +1,40 @@
:8080 {
	root * ./
	file_server {
		etag_file_extensions .b3sum .sha256
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":8080"
					],
					"routes": [
						{
							"handle": [
								{
									"handler": "vars",
									"root": "./"
								},
								{
									"etag_file_extensions": [
										".b3sum",
										".sha256"
									],
									"handler": "file_server",
									"hide": [
										"./Caddyfile"
									]
								}
							]
						}
					]
				}
			}
		}
	}
}
@@ -0,0 +1,36 @@
:80

file_server {
	browse {
		file_limit 4000
	}
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":80"
					],
					"routes": [
						{
							"handle": [
								{
									"browse": {
										"file_limit": 4000
									},
									"handler": "file_server",
									"hide": [
										"./Caddyfile"
									]
								}
							]
						}
					]
				}
			}
		}
	}
}
@@ -0,0 +1,32 @@
:80

file_server {
	pass_thru
}
----------
{
	"apps": {
		"http": {
			"servers": {
				"srv0": {
					"listen": [
						":80"
					],
					"routes": [
						{
							"handle": [
								{
									"handler": "file_server",
									"hide": [
										"./Caddyfile"
									],
									"pass_thru": true
								}
							]
						}
					]
				}
			}
		}
	}
}