// Package webadmin is a web app for the mox administrator for viewing and changing
// the configuration, like creating/removing accounts, viewing DMARC and TLS
// reports, checking DNS records for a domain, changing the webserver configuration,
// etc.
add webmail
it was far down on the roadmap, but is implemented earlier because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change, and jmap has lots of other requirements, so it's
already a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and backend are written together, making their interface/api much
smaller and simpler than jmap.
one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes. keeping this
data consistent after any change to the stored messages (through the code base)
is tricky, so mox now has a consistency check that verifies the counts are
correct, which runs only during tests, each time an internal account reference
is closed. we have a few more internal "changes" that are propagated for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.
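for illustration, a sketch of what such per-mailbox counts could look like; the
field names and flag semantics are illustrative, not mox's actual store types:

// per-mailbox message counts and size, kept up to date with every change to
// the stored messages.
type MailboxCounts struct {
	Total   int64 // total number of messages
	Unread  int64 // messages without the \Seen flag
	Unseen  int64 // unread and not marked \Deleted
	Deleted int64 // messages with the \Deleted flag
	Size    int64 // sum of message sizes in bytes
}

// apply a message with the given size and flags; sign is +1 when adding the
// message and -1 when removing it.
func (c *MailboxCounts) Apply(size int64, seen, deleted bool, sign int64) {
	c.Total += sign
	c.Size += sign * size
	if !seen {
		c.Unread += sign
		if !deleted {
			c.Unseen += sign
		}
	}
	if deleted {
		c.Deleted += sign
	}
}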
the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view; the default is automatic based
on screen size. the panes can be resized by the user. buttons for actions are
just text, not icons. clicking a button briefly shows the shortcut for the
action in the bottom right, helping with learning to operate quickly. any text
that is underdotted has a title attribute that causes more information to be
displayed, e.g. what a button does or what a field is about. to highlight
potential phishing attempts, any text (anywhere in the webclient) that switches
unicode "blocks" (a rough approximation to (language) scripts) within a word is
underlined orange.
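for illustration, a sketch of that check using unicode scripts rather than
blocks (mox's actual block-based lookup differs); it needs only the standard
unicode package:

// report whether a word mixes characters from more than one unicode script,
// ignoring the Common and Inherited scripts (digits, punctuation, etc).
func mixedScripts(word string) bool {
	var seen string
	for _, r := range word {
		for name, table := range unicode.Scripts {
			if name == "Common" || name == "Inherited" || !unicode.Is(table, r) {
				continue
			}
			if seen == "" {
				seen = name
			} else if seen != name {
				return true
			}
			break
		}
	}
	return false
}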
multiple messages can be selected with familiar ui interactions: clicking while
holding control and/or shift keys. keyboard navigation works with arrows/page
up/down and home/end keys, and also with a few basic vi-like keys for
list/message navigation. we prefer showing the text version of a message over
the html version (with inlined images only). html messages are shown in an
iframe served from an endpoint with CSP headers to prevent dangerous resources
(scripts, external images) from being loaded. the html is also sanitized, with
javascript removed. a user can choose to load external resources (e.g. images
for tracking purposes).
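a minimal sketch of such an endpoint, assuming a handler that already has the
sanitized html; the exact policy mox sends may differ:

// serve a sanitized html message body with a restrictive content-security-policy:
// no scripts, no external resources, only inlined (data:) images and inline styles.
func serveMsgHTML(w http.ResponseWriter, sanitizedHTML []byte) {
	w.Header().Set("Content-Security-Policy", "default-src 'none'; img-src data:; style-src 'unsafe-inline'")
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	w.Write(sanitizedHTML)
}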
the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend. since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls. the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple: it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, with no
additional runtime code needed or complicated build processes used. the webmail
is served from a compressed, cacheable html file that includes the style and
javascript, currently just over 225kb uncompressed, under 60kb compressed (not
minified, including comments). we include the generated js files in the
repository, to keep mox's self-contained Go binary easily buildable.
authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
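as a sketch of the server side of such a long-lived connection (the names are
illustrative, this is not the actual webmail endpoint):

// stream events to the client over a long-lived server-sent events connection.
func serveEvents(w http.ResponseWriter, r *http.Request, events <-chan []byte) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "500 - internal server error - streaming not supported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	for {
		select {
		case <-r.Context().Done():
			return
		case ev := <-events:
			fmt.Fprintf(w, "data: %s\n\n", ev)
			flusher.Flush()
		}
	}
}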
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
WebmailHTTP:
	Enabled: true
WebmailHTTPS:
	Enabled: true
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
package webadmin
import (
"bufio"
"bytes"
"context"
implement dnssec-awareness throughout code, and dane for incoming/outgoing mail delivery
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec secure) added, as well as support
for enhanced dns errors, and looking up tlsa records (for dane). ideally it
would be upstreamed, but the chances seem slim.
dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.
but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in the tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.
mox now applies dane when delivering messages over smtp. mox already
implemented mta-sts for webpki/pkix verification of certificates against the
(large) pool of CAs, and still enforces those policies when present. but it now
also checks for dane records, and will verify those if present. if dane and
mta-sts are both absent, the regular opportunistic tls with starttls is still
done, as is the fallback to plaintext.
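the most common tlsa form is usage dane-ee, selector spki, matching type
sha2-256 (a "3 1 1" record); as a sketch, verifying it needs only packages
already imported in this file:

// report whether the remote certificate matches a "3 1 1" tlsa record:
// sha2-256 over the certificate's subjectpublickeyinfo must equal the record data.
func tlsaMatch(cert *x509.Certificate, tlsaData []byte) bool {
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return bytes.Equal(sum[:], tlsaData)
}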
mox also makes it easy to set up dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup: their public keys must be published in tlsa records in dns. autocert
would generate private keys on its own, so it had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change so we can drop the fork.
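as a sketch, deriving the record data to publish for such a pre-generated key
(again using only packages imported above):

// return the value to publish in a "3 1 1" tlsa record for the given public
// key: the hex sha2-256 hash of its der-encoded subjectpublickeyinfo.
func tlsaRecordValue(pub crypto.PublicKey) (string, error) {
	spki, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("3 1 1 %x", sum), nil
}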
with this change, using the quickstart to set up a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
"crypto"
"crypto/ed25519"
replace http basic auth for web interfaces with session cookie & csrf-based auth
the http basic auth we had was very simple to reason about and to implement,
but it has a major downside: there is no way to log out, browsers keep sending
credentials. ideally, browsers themselves would show a button to stop sending
credentials. a related downside: the http auth mechanism doesn't indicate for
which server paths the credentials are. another downside: the original password
is sent to the server with each request, though sending original passwords to
web servers seems to be considered normal.
our new approach uses session cookies, along with csrf values when we can. the
sessions are server-side managed, automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (than with
long- vs short-term sessions and refreshing). the cookies are httponly,
samesite=strict, scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random, and associated with the session cookie.
the csrf must be sent as header for api calls, or as parameter for direct form
posts (where we cannot set a custom header). rest-like calls made directly by
the browser, e.g. for images, don't have a csrf protection. the csrf value is
returned by the Login api call and stored in localstorage.
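for illustration, setting such a cookie with the standard library (the cookie
name is made up here, and sessionToken would come from the server-side session
store):

func setSessionCookie(w http.ResponseWriter, r *http.Request, cookiePath, sessionToken string) {
	http.SetCookie(w, &http.Cookie{
		Name:     "webadminsession", // illustrative name
		Value:    sessionToken,
		Path:     cookiePath, // scoped to the web interface
		HttpOnly: true,
		SameSite: http.SameSiteStrictMode,
		Secure:   r.TLS != nil, // "secure" when set over https
	})
}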
api calls without credentials return code "user:noAuth", and with bad
credentials return "user:badAuth". the api client recognizes this and triggers
a login. after a login, all auth-failed api calls are automatically retried.
only for "user:badAuth" is an error message displayed in the login form (e.g.
session expired).
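on the server side this can be as simple as panicking with a coded error, which
the sherpa handler turns into an error response (a sketch, not the exact code):

func checkSession(sessionToken store.SessionToken) {
	if sessionToken == "" {
		panic(&sherpa.Error{Code: "user:noAuth", Message: "no session"})
	}
}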
in an ideal world, browsers would take care of most session management. a
server would indicate authentication is needed (like http basic auth), and the
browser would use trusted ui to request credentials for the server & path. the
browser could use safer mechanisms than sending original passwords to the
server, such as scram, along with a standard way to create sessions. for now,
web developers have to do authentication themselves: from showing the login
prompt to ensuring the right session/csrf cookies/localstorage/headers/etc are
sent with each request.
webauthn is a newer way to do authentication, perhaps we'll implement it in the
future. though hardware tokens aren't an attractive option for many users, and
it may be overkill as long as we still do old-fashioned authentication in smtp
& imap where passwords can be sent to the server.
for issue #58
cryptorand "crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"log/slog"
"net"
"net/http"
"net/url"
"os"
"path/filepath"
improve webserver, add domain redirects (aliases), add tests and admin page ui to manage the config
- make builtin http handlers serve on specific domains, such as for mta-sts, so
e.g. /.well-known/mta-sts.txt isn't served on all domains.
- add logging of a few more fields in access logging.
- small tweaks/bug fixes in webserver request handling.
- add config option for redirecting entire domains to another (common enough;
see the sketch after this list).
- split httpserver metric into two: one for duration until writing header (i.e.
performance of server), another for duration until full response is sent to
client (i.e. performance as perceived by users).
- add admin ui, a new page for managing the configs. after making changes
and hitting "save", the changes take effect immediately. the page itself
doesn't look very well-designed (many input fields, makes it look messy). i
have an idea to improve it (explained in admin.html as todo) by making the
layout look just like the config file. not urgent though.
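as a sketch, the request-handling side of such a whole-domain redirect (the
config plumbing is omitted):

// redirect all requests for a domain to the same path on another domain.
func redirectDomain(target string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		u := *r.URL
		u.Scheme = "https"
		u.Host = target
		http.Redirect(w, r, u.String(), http.StatusPermanentRedirect)
	}
}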
i've already changed my websites/webapps over.
the idea of adding a webserver is to take away a (the) reason for folks to want
to complicate their mox setup by running another webserver on the same machine.
i think the current webserver implementation can already serve most common use
cases. with a few more tweaks (feedback needed!) we should be able to get to 95%
of the use cases. the reverse proxy can take care of the remaining 5%.
nevertheless, a next step is still to change the quickstart to make it easier
for folks to run with an existing webserver, with existing tls certs/keys.
that's how this relates to issue #5.
"reflect"
"runtime/debug"
"slices"
"sort"
"strings"
"sync"
"time"
_ "embed"
"golang.org/x/exp/maps"
"golang.org/x/text/unicode/norm"
"github.com/mjl-/adns"
"github.com/mjl-/bstore"
"github.com/mjl-/sherpa"
"github.com/mjl-/sherpadoc"
"github.com/mjl-/sherpaprom"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/dmarc"
"github.com/mjl-/mox/dmarcdb"
"github.com/mjl-/mox/dmarcrpt"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/dnsbl"
"github.com/mjl-/mox/metrics"
"github.com/mjl-/mox/mlog"
mox "github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxvar"
"github.com/mjl-/mox/mtasts"
"github.com/mjl-/mox/mtastsdb"
"github.com/mjl-/mox/publicsuffix"
"github.com/mjl-/mox/queue"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/spf"
"github.com/mjl-/mox/store"
"github.com/mjl-/mox/tlsrpt"
"github.com/mjl-/mox/tlsrptdb"
"github.com/mjl-/mox/webauth"
)

var pkglog = mlog.New("webadmin", nil)

//go:embed api.json
var adminapiJSON []byte

//go:embed admin.html
var adminHTML []byte

//go:embed admin.js
var adminJS []byte

var webadminFile = &mox.WebappFile{
	HTML:     adminHTML,
	JS:       adminJS,
	HTMLPath: filepath.FromSlash("webadmin/admin.html"),
	JSPath:   filepath.FromSlash("webadmin/admin.js"),
}

var adminDoc = mustParseAPI("admin", adminapiJSON)

func mustParseAPI(api string, buf []byte) (doc sherpadoc.Section) {
	err := json.Unmarshal(buf, &doc)
	if err != nil {
		pkglog.Fatalx("parsing webadmin api docs", err, slog.String("api", api))
	}
	return doc
}

var sherpaHandlerOpts *sherpa.HandlerOpts

func makeSherpaHandler(cookiePath string, isForwarded bool) (http.Handler, error) {
	return sherpa.NewHandler("/api/", moxvar.Version, Admin{cookiePath, isForwarded}, &adminDoc, sherpaHandlerOpts)
}

func init() {
	collector, err := sherpaprom.NewCollector("moxadmin", nil)
	if err != nil {
		pkglog.Fatalx("creating sherpa prometheus collector", err)
	}
	sherpaHandlerOpts = &sherpa.HandlerOpts{Collector: collector, AdjustFunctionNames: "none", NoCORS: true}
	// Just to validate.
	_, err = makeSherpaHandler("", false)
	if err != nil {
		pkglog.Fatalx("sherpa handler", err)
	}
}

// Handler returns a handler for the webadmin endpoints, customized for the
// cookiePath.
func Handler(cookiePath string, isForwarded bool) func(w http.ResponseWriter, r *http.Request) {
	sh, err := makeSherpaHandler(cookiePath, isForwarded)
	return func(w http.ResponseWriter, r *http.Request) {
		if err != nil {
			http.Error(w, "500 - internal server error - cannot handle requests", http.StatusInternalServerError)
			return
		}
		handle(sh, isForwarded, w, r)
	}
}
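
// For illustration only (not part of the actual wiring, which mox does through
// its listener config): a caller could mount this handler like so:
//
//	mux := http.NewServeMux()
//	mux.HandleFunc("/admin/", Handler("/admin/", false))
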
// Admin exports web API functions for the admin web interface. All its methods are
// exported under api/. Function calls require valid HTTP Authentication
// credentials of a user.
type Admin struct {
	cookiePath  string // From listener, for setting authentication cookies.
	isForwarded bool   // From listener, whether we look at X-Forwarded-* headers.
}

type ctxKey string
var requestInfoCtxKey ctxKey = "requestInfo"

type requestInfo struct {
	SessionToken store.SessionToken
	Response     http.ResponseWriter
	Request      *http.Request // For Proto and TLS connection state during message submit.
}
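
// A sketch of how an api method can read the request info stored under
// requestInfoCtxKey (the method name here is hypothetical):
//
//	func (Admin) Example(ctx context.Context) {
//		reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
//		_ = reqInfo.SessionToken
//	}
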
func handle(apiHandler http.Handler, isForwarded bool, w http.ResponseWriter, r *http.Request) {
	ctx := context.WithValue(r.Context(), mlog.CidKey, mox.Cid())
log := pkglog.WithContext(ctx).With(slog.String("adminauth", ""))
the frontend types and checking code are generated with sherpats, which uses
the api definitions generated by sherpadoc based on the Go code. so types from
the backend are automatically propagated to the frontend. since there is no
framework to automatically propagate properties and rerender components,
changes coming in over the SSE connection are propagated explicitly with
regular function calls. the ui is separated into "views", each with a "root"
dom element that is added to the visible document. these views have additional
functions for getting changes propagated, often resulting in the view updating
its (internal) ui state (dom).
we keep the frontend compilation simple: it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, with no
additional runtime code needed and no complicated build process. the webmail
is served from a compressed, cacheable html file that includes the style and
the javascript, currently just over 225kb uncompressed, under 60kb compressed
(not minified, including comments). we include the generated js files in the
repository, to keep Go's binaries easily buildable and self-contained.
authentication is http basic auth, as with the account and admin pages. most
data comes in over one long-term SSE connection to the backend. api requests
signal which mailbox/search/messages should be sent over the SSE connection.
fetching individual messages, and making changes, are done through api calls.
the operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
WebmailHTTP:
Enabled: true
WebmailHTTPS:
Enabled: true
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
2023-08-07 22:57:03 +03:00
// HTML/JS can be retrieved without authentication.
2023-12-31 13:55:22 +03:00
if r.URL.Path == "/" {
switch r.Method {
case "GET" , "HEAD" :
webadminFile . Serve ( ctx , log , w , r )
2023-12-31 13:55:22 +03:00
default :
http . Error ( w , "405 - method not allowed - use get" , http . StatusMethodNotAllowed )
}
return
}
isAPI := strings.HasPrefix(r.URL.Path, "/api/")
// Only allow POST for api calls; they will not work cross-domain without CORS.
if isAPI && r.URL.Path != "/api/" && r.Method != "POST" {
http.Error(w, "405 - method not allowed - use post", http.StatusMethodNotAllowed)
return
}
// All other URLs, except the login endpoints, require some authentication.
var sessionToken store.SessionToken
if r.URL.Path != "/api/LoginPrep" && r.URL.Path != "/api/Login" {
var ok bool
_, sessionToken, _, ok = webauth.Check(ctx, log, webauth.Admin, "webadmin", isForwarded, w, r, isAPI, isAPI, false)
if !ok {
// Response has been written already.
2023-12-31 13:55:22 +03:00
return
2023-01-30 16:27:06 +03:00
}
}
2023-12-31 13:55:22 +03:00
if isAPI {
reqInfo := requestInfo{sessionToken, w, r}
ctx = context.WithValue(ctx, requestInfoCtxKey, reqInfo)
apiHandler.ServeHTTP(w, r.WithContext(ctx))
2023-01-30 16:27:06 +03:00
return
}
http.NotFound(w, r)
2023-01-30 16:27:06 +03:00
}
2023-08-09 09:02:58 +03:00
// xcheckf logs err and panics with a sherpa error if err is not nil. Context
// cancellation is reported as a user error, other errors as server errors.
func xcheckf(ctx context.Context, err error, format string, args ...any) {
if err == nil {
return
}
msg := fmt.Sprintf(format, args...)
errmsg := fmt.Sprintf("%s: %s", msg, err)
2023-12-05 15:35:58 +03:00
pkglog.WithContext(ctx).Errorx(msg, err)
2023-12-15 17:47:54 +03:00
code := "server:error"
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
code = "user:error"
}
panic(&sherpa.Error{Code: code, Message: errmsg})
2023-08-09 09:02:58 +03:00
}
// xcheckuserf is like xcheckf, but always panics with a user error.
func xcheckuserf(ctx context.Context, err error, format string, args ...any) {
if err == nil {
return
}
msg := fmt.Sprintf(format, args...)
errmsg := fmt.Sprintf("%s: %s", msg, err)
2023-12-05 15:35:58 +03:00
pkglog.WithContext(ctx).Errorx(msg, err)
2023-08-09 09:02:58 +03:00
panic(&sherpa.Error{Code: "user:error", Message: errmsg})
}
2024-03-05 18:30:38 +03:00
// xusererrorf panics with a user error, without an underlying error value.
func xusererrorf(ctx context.Context, format string, args ...any) {
msg := fmt.Sprintf(format, args...)
pkglog.WithContext(ctx).Error(msg)
panic(&sherpa.Error{Code: "user:error", Message: msg})
}
// LoginPrep returns a login token, and also sets it as cookie. Both must be
// present in the call to Login.
func (w Admin) LoginPrep(ctx context.Context) string {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
var data [8]byte
_, err := cryptorand.Read(data[:])
xcheckf(ctx, err, "generate token")
loginToken := base64.RawURLEncoding.EncodeToString(data[:])
webauth.LoginPrep(ctx, log, "webadmin", w.cookiePath, w.isForwarded, reqInfo.Response, reqInfo.Request, loginToken)
return loginToken
}
// Login returns a session token for the credentials, or fails with error code
// "user:badLogin". Call LoginPrep to get a loginToken.
func (w Admin) Login(ctx context.Context, loginToken, password string) store.CSRFToken {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
csrfToken, err := webauth.Login(ctx, log, webauth.Admin, "webadmin", w.cookiePath, w.isForwarded, reqInfo.Response, reqInfo.Request, loginToken, "", password)
if _, ok := err.(*sherpa.Error); ok {
panic(err)
}
xcheckf(ctx, err, "login")
return csrfToken
}
// Logout invalidates the session token.
func (w Admin) Logout(ctx context.Context) {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := webauth.Logout(ctx, log, webauth.Admin, "webadmin", w.cookiePath, w.isForwarded, reqInfo.Response, reqInfo.Request, "", reqInfo.SessionToken)
xcheckf(ctx, err, "logout")
}
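as a hedged sketch of how a client could drive these calls (assuming the
sherpa convention of POSTing a JSON body {"params": [...]} and reading a
{"result": ...} envelope back; the endpoint names and parameter order follow
the signatures above, the mount path and everything else is illustrative):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/http/cookiejar"
)

// call posts a sherpa-style request {"params": [...]} and decodes the
// {"result": ...} envelope. A sketch, not a full sherpa client (no error
// envelope handling).
func call(c *http.Client, url string, params []any, result any) error {
	reqBody, err := json.Marshal(map[string]any{"params": params})
	if err != nil {
		return err
	}
	resp, err := c.Post(url, "application/json", bytes.NewReader(reqBody))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var envelope struct {
		Result json.RawMessage `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&envelope); err != nil {
		return err
	}
	return json.Unmarshal(envelope.Result, result)
}

func main() {
	jar, _ := cookiejar.New(nil) // holds the login-token and session cookies.
	c := &http.Client{Jar: jar}
	base := "http://localhost/admin/api/" // hypothetical mount path.

	// LoginPrep returns the login token and also sets it as a cookie.
	var loginToken string
	if err := call(c, base+"LoginPrep", []any{}, &loginToken); err != nil {
		log.Fatal(err)
	}
	// Login sets the session cookie and returns the csrf token, to be sent
	// as a header on subsequent api calls.
	var csrf string
	if err := call(c, base+"Login", []any{loginToken, "admin-password"}, &csrf); err != nil {
		log.Fatal(err)
	}
	fmt.Println("csrf:", csrf)
}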
2023-01-30 16:27:06 +03:00
// Result is the base of the check results below, holding the errors, warnings
// and configuration instructions produced by a check.
type Result struct {
Errors []string
Warnings []string
Instructions []string
}
implement dnssec-awareness throughout code, and dane for incoming/outgoing mail delivery
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec-secure) bit added, as well as
support for extended dns errors and for looking up tlsa records (for dane).
ideally it would be upstreamed, but the chances seem slim.
dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.
but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.
mox now applies dane when delivering messages over smtp. mox already implemented
mta-sts for webpki/pkix verification of certificates against the (large) pool
of CAs, and still enforces those policies when present. but it now also checks
for dane records, and will verify those if present. if dane and mta-sts are
both absent, the regular opportunistic tls with starttls is still done, and
the fallback to plaintext also remains.
mox also makes it easy to set up dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup: their public keys must be published in tlsa records in dns. autocert
would generate private keys on its own, so it had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change and we can drop the fork.
with this change, using the quickstart to set up a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
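as a hedged illustration of the dane idea (not mox's implementation; the
function name is ours): for the common "TLSA 3 1 1" record (dane-ee, spki,
sha2-256), the published record data is the sha-256 hash of the certificate's
subjectPublicKeyInfo, which the verifier recomputes from the certificate
presented during starttls:

package dane

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
)

// daneEESPKIMatch sketches verification against a "TLSA 3 1 1" record: the
// record data is the sha-256 of the leaf certificate's SubjectPublicKeyInfo,
// recomputed here from the presented certificate.
func daneEESPKIMatch(leaf *x509.Certificate, tlsaHexData string) bool {
	sum := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
	return hex.EncodeToString(sum[:]) == tlsaHexData
}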
2023-10-10 13:09:35 +03:00
type DNSSECResult struct {
2023-01-30 16:27:06 +03:00
Result
}
2023-02-03 17:54:34 +03:00
type IPRevCheckResult struct {
Hostname dns.Domain // This hostname, IPs must resolve back to this.
IPNames map[string][]string // IP to names.
Result
}
2023-01-30 16:27:06 +03:00
type MX struct {
Host string
Pref int
IPs []string
}
type MXCheckResult struct {
Records []MX
Result
}
type TLSCheckResult struct {
Result
}
type DANECheckResult struct {
Result
}
2023-01-30 16:27:06 +03:00
type SPFRecord struct {
spf.Record
}
type SPFCheckResult struct {
DomainTXT string
DomainRecord *SPFRecord
HostTXT string
HostRecord *SPFRecord
Result
}
type DKIMCheckResult struct {
Records []DKIMRecord
Result
}
type DKIMRecord struct {
Selector string
TXT string
Record *dkim.Record
}
type DMARCRecord struct {
dmarc.Record
}
type DMARCCheckResult struct {
Domain string
TXT string
Record *DMARCRecord
Result
}
type TLSRPTRecord struct {
tlsrpt.Record
}
type TLSRPTCheckResult struct {
TXT string
Record *TLSRPTRecord
Result
}
type MTASTSRecord struct {
mtasts.Record
}
type MTASTSCheckResult struct {
TXT string
Record *MTASTSRecord
PolicyText string
Policy *mtasts.Policy
Result
}
type SRVConfCheckResult struct {
2023-12-31 13:55:22 +03:00
SRVs map[string][]net.SRV // Service (e.g. "_imaps") to records.
2023-01-30 16:27:06 +03:00
Result
}
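for illustration (hypothetical names), a client-connection SRV record of the
kind collected in SRVs looks like:

_imaps._tcp.example.org. SRV 0 1 993 mail.example.org.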
type AutoconfCheckResult struct {
assume a dns cname record mail.<domain>, pointing to the hostname of the mail server, for clients to connect to
the autoconfig/autodiscover endpoints, and the printed client settings (in the
quickstart, in the admin interface) now all point to the cname record (called
the "client settings domain"). it is configurable per domain, and set to
"mail.<domain>" by default. for existing mox installs, the domain can be added
by editing the config file.
this makes it easier for a domain to migrate to another server in the future.
client settings don't have to be updated; the cname can just be changed.
before, the hostname of the mail server was configured in email clients, and
migrating away would require changing settings in all clients.
if a client settings domain is configured, a TLS certificate for the name will
be requested through ACME, or must be configured manually.
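for example (hypothetical names):

mail.example.org. CNAME mx1.example.org.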
2023-12-24 13:01:16 +03:00
ClientSettingsDomainIPs []string
IPs []string
2023-01-30 16:27:06 +03:00
Result
}
type AutodiscoverSRV struct {
net.SRV
IPs []string
}
type AutodiscoverCheckResult struct {
Records []AutodiscoverSRV
Result
}
// CheckResult is the analysis of a domain, its actual configuration (DNS, TLS,
// connectivity) and the mox configuration. It includes configuration instructions
// (e.g. DNS records), and warnings and errors encountered.
type CheckResult struct {
Domain string
DNSSEC DNSSECResult
2023-02-03 17:54:34 +03:00
IPRev IPRevCheckResult
2023-01-30 16:27:06 +03:00
MX MXCheckResult
TLS TLSCheckResult
DANE DANECheckResult
2023-01-30 16:27:06 +03:00
SPF SPFCheckResult
DKIM DKIMCheckResult
DMARC DMARCCheckResult
implement outgoing tls reports
we were already accepting, processing and displaying incoming tls reports. now
we also start tracking TLS connection and security-policy-related errors for
outgoing message deliveries. we send reports once a day, to the reporting
addresses specified in the TLSRPT records (rua) of a policy domain. these
reports are about MTA-STS policies and/or DANE policies, and about
STARTTLS-related failures.
sending reports is enabled by default, but can be disabled by setting
NoOutgoingTLSReports in mox.conf.
only at the end of the implementation process came the realization that the
TLSRPT policy domain for DANE (MX) hosts is separate from the TLSRPT policy
for the recipient domain, and that MTA-STS and DANE TLS/policy results are
typically delivered in separate reports. so MX hosts need their own TLSRPT
policies.
config for the per-host TLSRPT policy should be added to mox.conf for existing
installs, in field HostTLSRPT. it is automatically configured by the quickstart
for new installs. with a HostTLSRPT config, the "dns records" and "dns check"
admin pages now suggest the per-host TLSRPT record; see the example after this
message. by creating that record, you're requesting TLS reports about your MX
host.
gathering all the TLS/policy results is somewhat tricky. the tentacles go
throughout the code. the positive result is that the TLS/policy-related code
had to be cleaned up a bit. for example, the smtpclient TLS modes now reflect
reality better, with independent settings for whether PKIX and/or DANE
verification has to be done, and whether verification errors have to be
ignored (e.g. for the tls-required: no header). also, cached mtasts policies of
mode "none" are now cleaned up once the MTA-STS DNS record goes away.
2023-11-09 19:40:46 +03:00
HostTLSRPT TLSRPTCheckResult
DomainTLSRPT TLSRPTCheckResult
2023-01-30 16:27:06 +03:00
MTASTS MTASTSCheckResult
SRVConf SRVConfCheckResult
Autoconf AutoconfCheckResult
Autodiscover AutodiscoverCheckResult
}
// logPanic can be called with a defer from a goroutine to prevent the entire program from being shut down in case of a panic.
func logPanic(ctx context.Context) {
x := recover()
if x == nil {
return
}
2023-12-05 15:35:58 +03:00
pkglog.WithContext(ctx).Error("recover from panic", slog.Any("panic", x))
2023-01-30 16:27:06 +03:00
debug.PrintStack()
2023-09-15 17:47:17 +03:00
metrics.PanicInc(metrics.Webadmin)
2023-01-30 16:27:06 +03:00
}
// return IPs we may be listening on.
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain checks for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are the recipient domain and the
sender domain, but also which delivery attempt this is. you could configure mox
to attempt sending through a 3rd party from the 4th attempt onwards; see the
sketch after this message.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
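an illustrative sketch only (the field names and layout follow the description
above but are assumptions, not a verified mox.conf schema), of a route that
relays through an authenticated smarthost from the 4th delivery attempt
onwards:

Transports:
	smarthost:
		Submissions:
			Host: smtp.example.com
			Auth:
				Username: outbound@example.com
				Password: secret
Routes:
	-
		ToDomain:
			- example.org
		MinimumAttempts: 4
		Transport: smarthost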
2023-06-16 19:38:28 +03:00
func xlistenIPs(ctx context.Context, receiveOnly bool) []net.IP {
ips, err := mox.IPs(ctx, receiveOnly)
xcheckf(ctx, err, "listing ips")
return ips
}
// return IPs from which we may be sending.
func xsendingIPs(ctx context.Context) []net.IP {
ips, err := mox.IPs(ctx, false)
2023-01-30 16:27:06 +03:00
xcheckf(ctx, err, "listing ips")
return ips
}
// CheckDomain checks the configuration for the domain, such as MX, SMTP STARTTLS,
// SPF, DKIM, DMARC, TLSRPT, MTASTS, autoconfig, autodiscover.
func (Admin) CheckDomain(ctx context.Context, domainName string) (r CheckResult) {
// todo future: should run these checks without a DNS cache so recent changes are picked up.
2023-12-05 15:35:58 +03:00
resolver := dns.StrictResolver{Pkg: "check", Log: pkglog.WithContext(ctx).Logger}
2023-02-02 14:58:33 +03:00
dialer := &net.Dialer{Timeout: 10 * time.Second}
nctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
return checkDomain(nctx, resolver, dialer, domainName)
2023-01-30 16:27:06 +03:00
}
2023-12-31 13:55:22 +03:00
// unptr returns a slice of values for a slice of pointers, and nil for a nil
// slice.
func unptr[T any](l []*T) []T {
if l == nil {
return nil
}
r := make([]T, len(l))
for i, e := range l {
r[i] = *e
}
return r
}
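for example (illustrative): with a, b := "x", "y", unptr([]*string{&a, &b})
returns []string{"x", "y"}, and unptr[string](nil) returns nil.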
2023-01-30 16:27:06 +03:00
func checkDomain(ctx context.Context, resolver dns.Resolver, dialer *net.Dialer, domainName string) (r CheckResult) {
2023-12-05 15:35:58 +03:00
log := pkglog.WithContext(ctx)
2023-02-03 17:54:34 +03:00
domain, err := dns.ParseDomain(domainName)
2023-08-09 09:02:58 +03:00
xcheckuserf(ctx, err, "parsing domain")
2023-01-30 16:27:06 +03:00
2023-02-03 17:54:34 +03:00
domConf, ok := mox.Conf.Domain(domain)
2023-01-30 16:27:06 +03:00
if !ok {
panic(&sherpa.Error{Code: "user:notFound", Message: "domain not found"})
}
listenIPs := xlistenIPs(ctx, true)
2023-01-30 16:27:06 +03:00
isListenIP := func(ip net.IP) bool {
for _, lip := range listenIPs {
if ip.Equal(lip) {
return true
}
}
return false
}
addf := func(l *[]string, format string, args ...any) {
*l = append(*l, fmt.Sprintf(format, args...))
}
// Host must be an absolute dns name, ending with a dot.
2023-01-30 16:27:06 +03:00
lookupIPs := func(errors *[]string, host string) (ips []string, ourIPs, notOurIPs []net.IP, rerr error) {
addrs, _, err := resolver.LookupHost(ctx, host)
2023-01-30 16:27:06 +03:00
if err != nil {
addf(errors, "Looking up %q: %s", host, err)
return nil, nil, nil, err
}
for _, addr := range addrs {
ip := net.ParseIP(addr)
if ip == nil {
addf(errors, "Bad IP %q", addr)
continue
}
ips = append(ips, ip.String())
if isListenIP(ip) {
ourIPs = append(ourIPs, ip)
} else {
notOurIPs = append(notOurIPs, ip)
}
}
return ips, ourIPs, notOurIPs, nil
}
checkTLS := func(errors *[]string, host string, ips []string, port string) {
d := tls.Dialer{
NetDialer: dialer,
Config: &tls.Config{
ServerName: host,
MinVersion: tls.VersionTLS12, // ../rfc/8996:31 ../rfc/8997:66
2023-03-10 18:25:18 +03:00
RootCAs: mox.Conf.Static.TLS.CertPool,
2023-01-30 16:27:06 +03:00
},
}
for _, ip := range ips {
conn, err := d.DialContext(ctx, "tcp", net.JoinHostPort(ip, port))
if err != nil {
addf(errors, "TLS connection to hostname %q, IP %q: %s", host, ip, err)
} else {
conn.Close()
}
}
}
2023-08-11 11:13:17 +03:00
// If at least one listener with SMTP enabled has unspecified NATed IPs, we'll skip
2023-03-09 17:24:06 +03:00
// some checks related to these IPs.
2023-08-11 11:13:17 +03:00
var isNAT, isUnspecifiedNAT bool
2023-03-09 17:24:06 +03:00
for _, l := range mox.Conf.Static.Listeners {
2023-08-11 11:13:17 +03:00
if !l.SMTP.Enabled {
continue
}
if l.IPsNATed {
isUnspecifiedNAT = true
isNAT = true
}
if len(l.NATIPs) > 0 {
2023-03-09 17:24:06 +03:00
isNAT = true
}
}
2023-01-30 16:27:06 +03:00
var wg sync . WaitGroup
	// DNSSEC
	wg.Add(1)
	go func() {
		defer logPanic(ctx)
		defer wg.Done()
		// Some DNSSEC-verifying resolvers return unauthentic data for ".", so we check "com".
		_, result, err := resolver.LookupNS(ctx, "com.")
		if err != nil {
			addf(&r.DNSSEC.Errors, "Looking up NS for \"com.\" to check support in resolver for DNSSEC-verification: %s", err)
		} else if !result.Authentic {
			addf(&r.DNSSEC.Warnings, `It looks like the DNS resolvers configured on your system do not verify DNSSEC, or aren't trusted (by having loopback IPs or through "options trust-ad" in /etc/resolv.conf). Without DNSSEC, outbound delivery with SMTP uses unprotected MX records, and SMTP STARTTLS connections cannot verify the TLS certificate with DANE (based on public keys in DNS), and will fall back to either MTA-STS for verification, or use "opportunistic TLS" with no certificate verification.`)
		} else {
			_, result, _ := resolver.LookupMX(ctx, domain.ASCII+".")
			if !result.Authentic {
				addf(&r.DNSSEC.Warnings, `DNS records for this domain (zone) are not DNSSEC-signed. Mail servers sending email to your domain, or receiving email from your domain, cannot verify that the MX/SPF/DKIM/DMARC/MTA-STS records they see are authentic.`)
			}
		}
		addf(&r.DNSSEC.Instructions, `Enable DNSSEC-signing of the DNS records of your domain (zone) at your DNS hosting provider.`)
		addf(&r.DNSSEC.Instructions, `If your DNS records are already DNSSEC-signed, you may not have a DNSSEC-verifying recursive resolver configured. Install unbound, ensure it has DNSSEC root keys (see unbound-anchor), and enable support for "extended dns errors" (EDE, available since unbound v1.16.0). Test with "dig com. ns" and look for "ad" (authentic data) in response "flags".

cat <<EOF >/etc/unbound/unbound.conf.d/ede.conf
server:
	ede: yes
	val-log-level: 2
EOF
`)
	}()
	// IPRev
	wg.Add(1)
	go func() {
		defer logPanic(ctx)
		defer wg.Done()
		// For each mox.Conf.SpecifiedSMTPListenIPs and all NATIPs, and each IP for
		// mox.Conf.HostnameDomain, check if they resolve back to the host name.
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards; see the
sketch below.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
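a minimal sketch of that first-match evaluation order (account routes, then
domain routes, then global routes), using simplified hypothetical types rather
than mox's actual config structs:

// Sketch: first-match route selection over account, domain and global routes.
package main

import "fmt"

// Route is a simplified stand-in for a configured route.
type Route struct {
	FromDomain  string // match on sender domain; empty matches any
	ToDomain    string // match on recipient domain; empty matches any
	MinAttempts int    // only match from this delivery attempt onwards
	Transport   string // empty means the zero transport: direct delivery
}

// selectTransport returns the transport of the first matching route,
// or the empty string for normal direct delivery.
func selectTransport(account, domain, global []Route, from, to string, attempt int) string {
	for _, routes := range [][]Route{account, domain, global} {
		for _, rt := range routes {
			if rt.FromDomain != "" && rt.FromDomain != from {
				continue
			}
			if rt.ToDomain != "" && rt.ToDomain != to {
				continue
			}
			if attempt < rt.MinAttempts {
				continue
			}
			return rt.Transport
		}
	}
	return ""
}

func main() {
	global := []Route{{ToDomain: "example.org", MinAttempts: 4, Transport: "smarthost"}}
	fmt.Println(selectTransport(nil, nil, global, "sender.example", "example.org", 1)) // "" (direct delivery)
	fmt.Println(selectTransport(nil, nil, global, "sender.example", "example.org", 4)) // "smarthost"
}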
		hostIPs := map[dns.Domain][]net.IP{}
		ips, _, err := resolver.LookupIP(ctx, "ip", mox.Conf.Static.HostnameDomain.ASCII+".")
		if err != nil {
			addf(&r.IPRev.Errors, "Looking up IPs for hostname: %s", err)
		}
		gatherMoreIPs := func(publicIPs []net.IP) {
		nextip:
			for _, ip := range publicIPs {
				for _, xip := range ips {
					if ip.Equal(xip) {
						continue nextip
					}
				}
				ips = append(ips, ip)
			}
		}
		if !isNAT {
			gatherMoreIPs(mox.Conf.Static.SpecifiedSMTPListenIPs)
		}
		for _, l := range mox.Conf.Static.Listeners {
			if !l.SMTP.Enabled {
				continue
			}
			var natips []net.IP
			for _, ip := range l.NATIPs {
				natips = append(natips, net.ParseIP(ip))
			}
			gatherMoreIPs(natips)
		}
		hostIPs[mox.Conf.Static.HostnameDomain] = ips
		iplist := func(ips []net.IP) string {
			var ipstrs []string
			for _, ip := range ips {
				ipstrs = append(ipstrs, ip.String())
			}
			return strings.Join(ipstrs, ", ")
		}
		r.IPRev.Hostname = mox.Conf.Static.HostnameDomain
		r.IPRev.Instructions = []string{
			fmt.Sprintf("Ensure IPs %s have reverse address %s.", iplist(ips), mox.Conf.Static.HostnameDomain.ASCII),
		}
		// If we have a socks transport, also check its host and IP.
		for tname, t := range mox.Conf.Static.Transports {
			if t.Socks != nil {
				hostIPs[t.Socks.Hostname] = append(hostIPs[t.Socks.Hostname], t.Socks.IPs...)
				instr := fmt.Sprintf("For SOCKS transport %s, ensure IPs %s have reverse address %s.", tname, iplist(t.Socks.IPs), t.Socks.Hostname)
				r.IPRev.Instructions = append(r.IPRev.Instructions, instr)
			}
		}
		type result struct {
			Host  dns.Domain
			IP    string
			Addrs []string
			Err   error
		}
		results := make(chan result)
		n := 0
		for host, ips := range hostIPs {
			for _, ip := range ips {
				n++
				s := ip.String()
				host := host
				go func() {
					addrs, _, err := resolver.LookupAddr(ctx, s)
					results <- result{host, s, addrs, err}
				}()
			}
		}
		r.IPRev.IPNames = map[string][]string{}
		for i := 0; i < n; i++ {
			lr := <-results
			host, addrs, ip, err := lr.Host, lr.Addrs, lr.IP, lr.Err
			if err != nil {
				addf(&r.IPRev.Errors, "Looking up reverse name for %s of %s: %v", ip, host, err)
				continue
			}
			if len(addrs) != 1 {
				addf(&r.IPRev.Errors, "Expected exactly 1 name for %s of %s, got %d (%v)", ip, host, len(addrs), addrs)
			}
			var match bool
			for i, a := range addrs {
				a = strings.TrimRight(a, ".")
				addrs[i] = a
				ad, err := dns.ParseDomain(a)
				if err != nil {
					addf(&r.IPRev.Errors, "Parsing reverse name %q for %s: %v", a, ip, err)
				}
				if ad == host {
					match = true
				}
			}
			if !match {
				addf(&r.IPRev.Errors, "Reverse name(s) %s for ip %s do not match hostname %s, which will cause other mail servers to reject incoming messages from this IP.", strings.Join(addrs, ","), ip, host)
			}
			r.IPRev.IPNames[ip] = addrs
		}
		// Linux machines are often initially set up with a loopback IP for the hostname in
		// /etc/hosts, presumably because it isn't known if their external IPs are static.
		// For mail servers, they should certainly be static. The quickstart would also
		// have warned about this, but could have been missed/ignored.
		for _, ip := range ips {
			if ip.IsLoopback() {
				addf(&r.IPRev.Errors, "Hostname %s resolves to loopback IP %s, this will likely prevent email delivery to local accounts from working. The loopback IP was probably configured in /etc/hosts at system installation time. Replace the loopback IP with your actual external IPs in /etc/hosts.", mox.Conf.Static.HostnameDomain, ip.String())
			}
		}
	}()
	// MX
	wg.Add(1)
	go func() {
		defer logPanic(ctx)
		defer wg.Done()
		mxs, _, err := resolver.LookupMX(ctx, domain.ASCII+".")
		if err != nil {
			addf(&r.MX.Errors, "Looking up MX records for %s: %s", domain, err)
		}
		r.MX.Records = make([]MX, len(mxs))
		for i, mx := range mxs {
			r.MX.Records[i] = MX{mx.Host, int(mx.Pref), nil}
		}
		if len(mxs) == 1 && mxs[0].Host == "." {
			addf(&r.MX.Errors, `MX records consist of the explicit null MX record ("."), indicating that the domain does not accept email.`)
			return
		}
		for i, mx := range mxs {
			ips, ourIPs, notOurIPs, err := lookupIPs(&r.MX.Errors, mx.Host)
			if err != nil {
				addf(&r.MX.Errors, "Looking up IPs for mx host %q: %s", mx.Host, err)
			}
			r.MX.Records[i].IPs = ips
			if isUnspecifiedNAT {
				continue
			}
			if len(ourIPs) == 0 {
				addf(&r.MX.Errors, "None of the IPs that mx %q points to is ours: %v", mx.Host, notOurIPs)
			} else if len(notOurIPs) > 0 {
				addf(&r.MX.Errors, "Some of the IPs that mx %q points to are not ours: %v", mx.Host, notOurIPs)
			}
		}
		r.MX.Instructions = []string{
			fmt.Sprintf("Ensure a DNS MX record like the following exists:\n\n\t%s MX 10 %s\n\nWithout the trailing dot, the name would be interpreted as relative to the domain.", domain.ASCII+".", mox.Conf.Static.HostnameDomain.ASCII+"."),
		}
	}()
	// TLS, mostly checking certificate expiration and CA trust.
	// todo: should add checks about the listeners (which aren't specific to domains) somewhere else, not on the domain page with this checkDomain call. i.e. submissions, imap starttls, imaps.
	wg.Add(1)
	go func() {
		defer logPanic(ctx)
		defer wg.Done()
		// MTA-STS, autoconfig, autodiscover are checked in their sections.
		// Dial a single MX host with given IP and perform STARTTLS handshake.
		dialSMTPSTARTTLS := func(host, ip string) error {
			conn, err := dialer.DialContext(ctx, "tcp", net.JoinHostPort(ip, "25"))
			if err != nil {
				return err
			}
			defer func() {
				if conn != nil {
					conn.Close()
				}
			}()
			end := time.Now().Add(10 * time.Second)
			cctx, cancel := context.WithTimeout(ctx, 10*time.Second)
			defer cancel()
			err = conn.SetDeadline(end)
			log.WithContext(ctx).Check(err, "setting deadline")
			br := bufio.NewReader(conn)
			_, err = br.ReadString('\n')
			if err != nil {
				return fmt.Errorf("reading SMTP banner from remote: %s", err)
			}
			if _, err := fmt.Fprintf(conn, "EHLO moxtest\r\n"); err != nil {
				return fmt.Errorf("writing SMTP EHLO to remote: %s", err)
			}
			for {
				line, err := br.ReadString('\n')
				if err != nil {
					return fmt.Errorf("reading SMTP EHLO response from remote: %s", err)
				}
				if strings.HasPrefix(line, "250-") {
					continue
				}
				if strings.HasPrefix(line, "250 ") {
					break
				}
				return fmt.Errorf("unexpected response to SMTP EHLO from remote: %q", strings.TrimSuffix(line, "\r\n"))
			}
			if _, err := fmt.Fprintf(conn, "STARTTLS\r\n"); err != nil {
				return fmt.Errorf("writing SMTP STARTTLS to remote: %s", err)
			}
			line, err := br.ReadString('\n')
			if err != nil {
				return fmt.Errorf("reading response to SMTP STARTTLS from remote: %s", err)
			}
			if !strings.HasPrefix(line, "220 ") {
				return fmt.Errorf("SMTP STARTTLS response from remote not 220 OK: %q", strings.TrimSuffix(line, "\r\n"))
			}
			config := &tls.Config{
				ServerName: host,
				RootCAs:    mox.Conf.Static.TLS.CertPool,
			}
			tlsconn := tls.Client(conn, config)
			if err := tlsconn.HandshakeContext(cctx); err != nil {
				return fmt.Errorf("TLS handshake after SMTP STARTTLS: %s", err)
			}
			cancel()
			conn.Close()
			conn = nil
			return nil
		}
		checkSMTPSTARTTLS := func() {
			// Initial errors are ignored, will already have been warned about by MX checks.
			mxs, _, err := resolver.LookupMX(ctx, domain.ASCII+".")
			if err != nil {
				return
			}
			if len(mxs) == 1 && mxs[0].Host == "." {
				return
			}
			for _, mx := range mxs {
				ips, _, _, err := lookupIPs(&r.MX.Errors, mx.Host)
				if err != nil {
					continue
				}
				for _, ip := range ips {
					if err := dialSMTPSTARTTLS(mx.Host, ip); err != nil {
						addf(&r.TLS.Errors, "SMTP connection with STARTTLS to MX hostname %q IP %s: %s", mx.Host, ip, err)
					}
				}
			}
		}
		checkSMTPSTARTTLS()
	}()
	// DANE
	wg.Add(1)
	go func() {
		defer logPanic(ctx)
		defer wg.Done()
		daneRecords := func(l config.Listener) map[string]struct{} {
			if l.TLS == nil {
				return nil
			}
			records := map[string]struct{}{}
			addRecord := func(privKey crypto.Signer) {
				spkiBuf, err := x509.MarshalPKIXPublicKey(privKey.Public())
				if err != nil {
					addf(&r.DANE.Errors, "marshal SubjectPublicKeyInfo for DANE record: %v", err)
					return
				}
				sum := sha256.Sum256(spkiBuf)
				r := adns.TLSA{
					Usage:     adns.TLSAUsageDANEEE,
					Selector:  adns.TLSASelectorSPKI,
					MatchType: adns.TLSAMatchTypeSHA256,
					CertAssoc: sum[:],
				}
				records[r.Record()] = struct{}{}
			}
			for _, privKey := range l.TLS.HostPrivateRSA2048Keys {
				addRecord(privKey)
			}
			for _, privKey := range l.TLS.HostPrivateECDSAP256Keys {
				addRecord(privKey)
			}
			return records
		}
		expectedDANERecords := func(host string) map[string]struct{} {
			for _, l := range mox.Conf.Static.Listeners {
				if l.HostnameDomain.ASCII == host {
					return daneRecords(l)
				}
			}
			public := mox.Conf.Static.Listeners["public"]
			if mox.Conf.Static.HostnameDomain.ASCII == host && public.HostnameDomain.ASCII == "" {
				return daneRecords(public)
			}
			return nil
		}
		mxl, result, err := resolver.LookupMX(ctx, domain.ASCII+".")
		if err != nil {
			addf(&r.DANE.Errors, "Looking up MX hosts to check for DANE records: %s", err)
		} else {
			if !result.Authentic {
				addf(&r.DANE.Warnings, "DANE is inactive because MX records are not DNSSEC-signed.")
			}
			for _, mx := range mxl {
				expect := expectedDANERecords(mx.Host)
				tlsal, tlsaResult, err := resolver.LookupTLSA(ctx, 25, "tcp", mx.Host+".")
				if dns.IsNotFound(err) {
					if len(expect) > 0 {
						addf(&r.DANE.Errors, "No DANE records for MX host %s, expected: %s.", mx.Host, strings.Join(maps.Keys(expect), "; "))
					}
					continue
				} else if err != nil {
					addf(&r.DANE.Errors, "Looking up DANE records for MX host %s: %v", mx.Host, err)
					continue
				} else if !tlsaResult.Authentic && len(tlsal) > 0 {
					addf(&r.DANE.Errors, "DANE records exist for MX host %s, but are not DNSSEC-signed.", mx.Host)
				}
				extra := map[string]struct{}{}
				for _, e := range tlsal {
					s := e.Record()
					if _, ok := expect[s]; ok {
						delete(expect, s)
					} else {
						extra[s] = struct{}{}
					}
				}
				if len(expect) > 0 {
					l := maps.Keys(expect)
					sort.Strings(l)
					addf(&r.DANE.Errors, "Missing DANE records of type TLSA for MX host _25._tcp.%s: %s", mx.Host, strings.Join(l, "; "))
				}
				if len(extra) > 0 {
					l := maps.Keys(extra)
					sort.Strings(l)
					addf(&r.DANE.Errors, "Unexpected DANE records of type TLSA for MX host _25._tcp.%s: %s", mx.Host, strings.Join(l, "; "))
				}
			}
		}
		public := mox.Conf.Static.Listeners["public"]
		pubDom := public.HostnameDomain
		if pubDom.ASCII == "" {
			pubDom = mox.Conf.Static.HostnameDomain
		}
		records := maps.Keys(daneRecords(public))
		sort.Strings(records)
		if len(records) > 0 {
			instr := "Ensure the DNS records below exist. These records are for the whole machine, not per domain, so create them only once. Make sure DNSSEC is enabled, otherwise the records have no effect. The records indicate that a remote mail server trying to deliver email with SMTP (TCP port 25) must verify the TLS certificate with DANE-EE (3), based on the certificate public key (\"SPKI\", 1) that is SHA2-256-hashed (1) to the hexadecimal hash. DANE-EE verification means only the certificate or public key is verified, not whether the certificate is signed by a (centralized) certificate authority (CA), is expired, or matches the host name.\n\n"
			for _, r := range records {
				instr += fmt.Sprintf("\t_25._tcp.%s. TLSA %s\n", pubDom.ASCII, r)
			}
			addf(&r.DANE.Instructions, instr)
		}
	}()
	// SPF
	// todo: add warnings if we have Transports with submission? admin should ensure their IPs are in the SPF record. it may be an IP(net), or an include. that means we cannot easily check for it. and should we first check the transport can be used from this domain (or an account that has this domain?). also see DKIM.
	wg.Add(1)
	go func() {
		defer logPanic(ctx)
		defer wg.Done()
		// Verify a domain with the configured IPs that do SMTP.
		verifySPF := func(kind string, domain dns.Domain) (string, *SPFRecord, spf.Record) {
			_, txt, record, _, err := spf.Lookup(ctx, log.Logger, resolver, domain)
			if err != nil {
				addf(&r.SPF.Errors, "Looking up %s SPF record: %s", kind, err)
			}
			var xrecord *SPFRecord
			if record != nil {
				xrecord = &SPFRecord{*record}
			}
			spfr := spf.Record{
				Version: "spf1",
			}
		checkSPFIP := func(ip net.IP) {
			mechanism := "ip4"
			if ip.To4() == nil {
				mechanism = "ip6"
			}
			spfr.Directives = append(spfr.Directives, spf.Directive{Mechanism: mechanism, IP: ip})
			if record == nil {
				return
			}
			args := spf.Args{
				RemoteIP:          ip,
				MailFromLocalpart: "postmaster",
				MailFromDomain:    domain,
				HelloDomain:       dns.IPDomain{Domain: domain},
				LocalIP:           net.ParseIP("127.0.0.1"),
				LocalHostname:     dns.Domain{ASCII: "localhost"},
			}
2023-12-05 15:35:58 +03:00
			status, mechanism, expl, _, err := spf.Evaluate(ctx, log.Logger, record, resolver, args)
			if err != nil {
				addf(&r.SPF.Errors, "Evaluating IP %q against %s SPF record: %s", ip, kind, err)
			} else if status != spf.StatusPass {
				addf(&r.SPF.Errors, "IP %q does not pass %s SPF evaluation, status not \"pass\" but %q (mechanism %q, explanation %q)", ip, kind, status, mechanism, expl)
			}
		}
2023-01-30 16:27:06 +03:00
		for _, l := range mox.Conf.Static.Listeners {
2023-03-09 17:24:06 +03:00
			if !l.SMTP.Enabled || l.IPsNATed {
2023-01-30 16:27:06 +03:00
				continue
			}
2023-08-11 11:13:17 +03:00
			ips := l.IPs
			if len(l.NATIPs) > 0 {
				ips = l.NATIPs
			}
			for _, ipstr := range ips {
2023-01-30 16:27:06 +03:00
				ip := net.ParseIP(ipstr)
				checkSPFIP(ip)
			}
		}
		for _, t := range mox.Conf.Static.Transports {
			if t.Socks != nil {
				for _, ip := range t.Socks.IPs {
					checkSPFIP(ip)
2023-01-30 16:27:06 +03:00
				}
			}
		}
2023-01-30 16:27:06 +03:00
		spfr.Directives = append(spfr.Directives, spf.Directive{Qualifier: "-", Mechanism: "all"})
		return txt, xrecord, spfr
	}
	// Check SPF record for domain.
	var dspfr spf.Record
2023-02-03 17:54:34 +03:00
	r.SPF.DomainTXT, r.SPF.DomainRecord, dspfr = verifySPF("domain", domain)
2023-01-30 16:27:06 +03:00
	// todo: possibly check all hosts for MX records? assuming they are also sending mail servers.
	r.SPF.HostTXT, r.SPF.HostRecord, _ = verifySPF("host", mox.Conf.Static.HostnameDomain)
	dtxt, err := dspfr.Record()
	if err != nil {
		addf(&r.SPF.Errors, "Making SPF record for instructions: %s", err)
	}
2023-10-13 09:16:46 +03:00
	domainspf := fmt.Sprintf("%s TXT %s", domain.ASCII+".", mox.TXTStrings(dtxt))
2023-01-30 16:27:06 +03:00
	// Check SPF record for sending host. ../rfc/7208:2263 ../rfc/7208:2287
2023-10-13 09:16:46 +03:00
	hostspf := fmt.Sprintf(`%s TXT "v=spf1 a -all"`, mox.Conf.Static.HostnameDomain.ASCII+".")
2023-01-30 16:27:06 +03:00
	addf(&r.SPF.Instructions, "Ensure DNS TXT records like the following exist:\n\n\t%s\n\t%s\n\nIf you have an existing mail setup, with other hosts also sending mail for your domain, you should add those IPs as well. You could replace \"-all\" with \"~all\" to treat mail sent from unlisted IPs as \"softfail\", or with \"?all\" for \"neutral\".", domainspf, hostspf)
}()
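as an illustration, with placeholder domain example.com, mail host
mail.example.com and placeholder IPs, the generated instructions suggest
records of roughly this shape:

	example.com.      TXT "v=spf1 ip4:198.51.100.1 ip6:2001:db8::1 -all"
	mail.example.com. TXT "v=spf1 a -all"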
// DKIM
// todo: add warnings if we have Transports with submission? admin should ensure DKIM records exist. we cannot easily check if they actually exist though. and should we first check the transport can be used from this domain (or an account that has this domain?). also see SPF.
2023-01-30 16:27:06 +03:00
wg.Add(1)
go func() {
	defer logPanic(ctx)
	defer wg.Done()
	var missing []string
	var haveEd25519 bool
2023-02-03 17:54:34 +03:00
	for sel, selc := range domConf.DKIM.Selectors {
2023-01-30 16:27:06 +03:00
		if _, ok := selc.Key.(ed25519.PrivateKey); ok {
			haveEd25519 = true
		}
2023-12-05 15:35:58 +03:00
		_, record, txt, _, err := dkim.Lookup(ctx, log.Logger, resolver, selc.Domain, domain)
2023-01-30 16:27:06 +03:00
		if err != nil {
			missing = append(missing, sel)
			if errors.Is(err, dkim.ErrNoRecord) {
				addf(&r.DKIM.Errors, "No DKIM DNS record for selector %q.", sel)
			} else if errors.Is(err, dkim.ErrSyntax) {
				addf(&r.DKIM.Errors, "Parsing DKIM DNS record for selector %q: %s", sel, err)
			} else {
				addf(&r.DKIM.Errors, "Fetching DKIM record for selector %q: %s", sel, err)
			}
		}
		if txt != "" {
			r.DKIM.Records = append(r.DKIM.Records, DKIMRecord{sel, txt, record})
			pubKey := selc.Key.Public()
			var pk []byte
			switch k := pubKey.(type) {
			case *rsa.PublicKey:
				var err error
				pk, err = x509.MarshalPKIXPublicKey(k)
				if err != nil {
					addf(&r.DKIM.Errors, "Marshal public key for %q to compare against DNS: %s", sel, err)
					continue
				}
			case ed25519.PublicKey:
				pk = []byte(k)
			default:
				addf(&r.DKIM.Errors, "Internal error: unknown public key type %T.", pubKey)
				continue
			}
			if record != nil && !bytes.Equal(record.Pubkey, pk) {
				addf(&r.DKIM.Errors, "For selector %q, the public key in DKIM DNS TXT record does not match with configured private key.", sel)
				missing = append(missing, sel)
			}
		}
	}
2023-02-03 17:54:34 +03:00
	if len(domConf.DKIM.Selectors) == 0 {
2023-01-30 16:27:06 +03:00
		addf(&r.DKIM.Errors, "No DKIM configuration, add a key to the configuration file, and instructions for DNS records will appear here.")
	} else if !haveEd25519 {
		addf(&r.DKIM.Warnings, "Consider adding an ed25519 key: the keys are smaller, the cryptography faster and more modern.")
	}
	instr := ""
	for _, sel := range missing {
		dkimr := dkim.Record{
			Version: "DKIM1",
			Hashes:  []string{"sha256"},
2023-02-03 17:54:34 +03:00
			PublicKey: domConf.DKIM.Selectors[sel].Key.Public(),
2023-01-30 16:27:06 +03:00
		}
		switch dkimr.PublicKey.(type) {
		case *rsa.PublicKey:
		case ed25519.PublicKey:
			dkimr.Key = "ed25519"
		default:
			addf(&r.DKIM.Errors, "Internal error: unknown public key type %T.", dkimr.PublicKey)
		}
		txt, err := dkimr.Record()
		if err != nil {
			addf(&r.DKIM.Errors, "Making DKIM record for instructions: %s", err)
			continue
		}
2023-10-13 09:16:46 +03:00
		instr += fmt.Sprintf("\n\t%s._domainkey TXT %s\n", sel, mox.TXTStrings(txt))
2023-01-30 16:27:06 +03:00
	}
	if instr != "" {
		instr = "Ensure the following DNS record(s) exist, so mail servers receiving emails from this domain can verify the signatures in the mail headers:\n" + instr
		addf(&r.DKIM.Instructions, "%s", instr)
	}
}()
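as an illustration, for a hypothetical selector "sel" at example.com with an
ed25519 key, the suggested record has roughly this shape, with p= carrying the
base64-encoded public key:

	sel._domainkey.example.com. TXT "v=DKIM1;h=sha256;k=ed25519;p=<base64-encoded-public-key>"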
// DMARC
wg.Add(1)
go func() {
	defer logPanic(ctx)
	defer wg.Done()
2023-12-05 15:35:58 +03:00
	_, dmarcDomain, record, txt, _, err := dmarc.Lookup(ctx, log.Logger, resolver, domain)
2023-01-30 16:27:06 +03:00
	if err != nil {
		addf(&r.DMARC.Errors, "Looking up DMARC record: %s", err)
	} else if record == nil {
		addf(&r.DMARC.Errors, "No DMARC record")
	}
	r.DMARC.Domain = dmarcDomain.Name()
	r.DMARC.TXT = txt
	if record != nil {
		r.DMARC.Record = &DMARCRecord{*record}
	}
	if record != nil && record.Policy == "none" {
		addf(&r.DMARC.Warnings, "DMARC policy is in test mode (p=none), do not forget to change to p=reject or p=quarantine after the test period has been completed.")
	}
	if record != nil && record.SubdomainPolicy == "none" {
		addf(&r.DMARC.Warnings, "DMARC subdomain policy is in test mode (sp=none), do not forget to change to sp=reject or sp=quarantine after the test period has been completed.")
	}
	if record != nil && len(record.AggregateReportAddresses) == 0 {
		addf(&r.DMARC.Warnings, "It is recommended you specify an address to receive aggregate reports about delivery success in the DMARC record, see instructions.")
	}
2023-08-23 15:27:21 +03:00
	dmarcr := dmarc.DefaultRecord
	dmarcr.Policy = "reject"
	var extInstr string
2023-02-03 17:54:34 +03:00
	if domConf.DMARC != nil {
2023-08-23 15:27:21 +03:00
		// If the domain is in a different Organizational Domain, the receiving domain
		// needs a special DNS record to opt-in to receiving reports. We check for that
		// record.
		// ../rfc/7489:1541
2023-12-05 15:35:58 +03:00
		orgDom := publicsuffix.Lookup(ctx, log.Logger, domain)
		destOrgDom := publicsuffix.Lookup(ctx, log.Logger, domConf.DMARC.DNSDomain)
2023-08-23 15:27:21 +03:00
		if orgDom != destOrgDom {
2023-12-05 15:35:58 +03:00
			accepts, status, _, _, _, err := dmarc.LookupExternalReportsAccepted(ctx, log.Logger, resolver, domain, domConf.DMARC.DNSDomain)
2023-08-23 15:27:21 +03:00
			if status != dmarc.StatusNone {
				addf(&r.DMARC.Errors, "Checking if external destination accepts reports: %s", err)
			} else if !accepts {
				addf(&r.DMARC.Errors, "External destination does not accept reports (%s)", err)
			}
2023-10-13 09:16:46 +03:00
			extInstr = fmt.Sprintf("Ensure a DNS TXT record exists in the domain of the destination address to opt-in to receiving reports from this domain:\n\n\t%s._report._dmarc.%s. TXT \"v=DMARC1;\"\n\n", domain.ASCII, domConf.DMARC.DNSDomain.ASCII)
2023-08-23 15:27:21 +03:00
		}
		uri := url.URL{
			Scheme: "mailto",
			Opaque: smtp.NewAddress(domConf.DMARC.ParsedLocalpart, domConf.DMARC.DNSDomain).Pack(false),
		}
2023-11-10 22:25:06 +03:00
		uristr := uri.String()
2023-08-23 15:27:21 +03:00
		dmarcr.AggregateReportAddresses = []dmarc.URI{
2023-11-10 22:25:06 +03:00
			{Address: uristr, MaxSize: 10, Unit: "m"},
		}
		if record != nil {
			found := false
			for _, addr := range record.AggregateReportAddresses {
				if addr.Address == uristr {
					found = true
					break
				}
			}
			if !found {
				addf(&r.DMARC.Errors, "Configured DMARC reporting address is not present in record.")
			}
2023-08-23 15:27:21 +03:00
		}
2023-01-30 16:27:06 +03:00
	} else {
2023-08-23 15:27:21 +03:00
		addf(&r.DMARC.Instructions, `Configure a DMARC destination in the domain in the config file.`)
2023-01-30 16:27:06 +03:00
	}
2023-10-13 09:16:46 +03:00
	instr := fmt.Sprintf("Ensure a DNS TXT record like the following exists:\n\n\t_dmarc TXT %s\n\nYou can start with testing mode by replacing p=reject with p=none. You can also request the policy to be applied to a percentage of emails instead of all, by adding pct=X, with X between 0 and 100. Keep in mind that receiving mail servers will apply some anti-spam assessment regardless of the policy and whether it is applied to the message. The rua= part requests daily aggregate reports to be sent to the specified address, which is automatically configured and reports automatically analyzed.", mox.TXTStrings(dmarcr.String()))
2023-01-30 16:27:06 +03:00
	addf(&r.DMARC.Instructions, instr)
2023-08-23 15:27:21 +03:00
	if extInstr != "" {
		addf(&r.DMARC.Instructions, extInstr)
	}
2023-01-30 16:27:06 +03:00
}()
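as an illustration with placeholder names, the suggested record (reject
policy, aggregate reports to the configured address with a 10 megabyte report
size limit) looks roughly like:

	_dmarc.example.com. TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com!10m"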
implement outgoing tls reports
we were already accepting, processing and displaying incoming tls reports. now
we start tracking TLS connection and security-policy-related errors for
outgoing message deliveries as well. we send reports once a day, to the
reporting addresses specified in TLSRPT records (rua) of a policy domain. these
reports are about MTA-STS policies and/or DANE policies, and about
STARTTLS-related failures.
sending reports is enabled by default, but can be disabled through setting
NoOutgoingTLSReports in mox.conf.
only at the end of the implementation process came the realization that the
TLSRPT policy domain for DANE (MX) hosts is separate from the TLSRPT policy
for the recipient domain, and that MTA-STS and DANE TLS/policy results are
typically delivered in separate reports. so MX hosts need their own TLSRPT
policies.
config for the per-host TLSRPT policy should be added to mox.conf for existing
installs, in field HostTLSRPT. it is automatically configured by quickstart for
new installs. with a HostTLSRPT config, the "dns records" and "dns check" admin
pages now suggest the per-host TLSRPT record. by creating that record, you're
requesting TLS reports about your MX host.
gathering all the TLS/policy results is somewhat tricky. the tentacles go
throughout the code. the positive result is that the TLS/policy-related code
had to be cleaned up a bit. for example, the smtpclient TLS modes now reflect
reality better, with independent settings about whether PKIX and/or DANE
verification has to be done, and/or whether verification errors have to be
ignored (e.g. for tls-required: no header). also, cached mtasts policies of
mode "none" are now cleaned up once the MTA-STS DNS record goes away.
2023-11-09 19:40:46 +03:00
checkTLSRPT := func(result *TLSRPTCheckResult, dom dns.Domain, address smtp.Address, isHost bool) {
2023-01-30 16:27:06 +03:00
	defer logPanic(ctx)
	defer wg.Done()
2023-12-05 15:35:58 +03:00
	record, txt, err := tlsrpt.Lookup(ctx, log.Logger, resolver, dom)
2023-01-30 16:27:06 +03:00
	if err != nil {
		addf(&result.Errors, "Looking up TLSRPT record: %s", err)
2023-01-30 16:27:06 +03:00
}
	result.TXT = txt
2023-01-30 16:27:06 +03:00
if record != nil {
		result.Record = &TLSRPTRecord{*record}
2023-01-30 16:27:06 +03:00
}
	instr := `TLSRPT is an opt-in mechanism to request feedback about TLS connectivity from remote SMTP servers when they connect to us. It allows detecting delivery problems and unwanted downgrades to plaintext SMTP connections. With TLSRPT you configure an email address to which reports should be sent. Remote SMTP servers will send a report once a day with the number of successful connections, and the number of failed connections including details that should help debugging/resolving any issues. Both the mail host (e.g. mail.domain.example) and a recipient domain (e.g. domain.example, with an MX record pointing to mail.domain.example) can have a TLSRPT record. The TLSRPT record for the host is for reporting about DANE, the TLSRPT record for the domain is for MTA-STS.`
	var zeroaddr smtp.Address
	if address != zeroaddr {
2023-08-23 15:27:21 +03:00
// TLSRPT does not require validation of reporting addresses outside the domain.
// ../rfc/8460:1463
		uri := url.URL{
			Scheme: "mailto",
			Opaque: address.Pack(false),
2023-08-23 15:27:21 +03:00
}
2023-11-10 22:25:06 +03:00
		rua := tlsrpt.RUA(uri.String())
2023-08-23 15:27:21 +03:00
		tlsrptr := &tlsrpt.Record{
			Version: "TLSRPTv1",
2023-11-10 22:25:06 +03:00
			RUAs: [][]tlsrpt.RUA{{rua}},
2023-08-23 15:27:21 +03:00
		}
		instr += fmt.Sprintf(`

Ensure a DNS TXT record like the following exists:

	_smtp._tls TXT %s
`, mox.TXTStrings(tlsrptr.String()))
2023-11-10 22:25:06 +03:00
		if err == nil {
			found := false
		RUA:
			for _, l := range record.RUAs {
				for _, e := range l {
					if e == rua {
						found = true
						break RUA
					}
				}
			}
			if !found {
				addf(&result.Errors, `Configured reporting address is not present in TLSRPT record.`)
			}
		}
	} else if isHost {
		addf(&result.Errors, `Configure a host TLSRPT localpart in static mox.conf config file.`)
2023-08-23 15:27:21 +03:00
} else {
		addf(&result.Errors, `Configure a domain TLSRPT destination in domains.conf config file.`)
2023-08-23 15:27:21 +03:00
}
	addf(&result.Instructions, instr)
}
2023-12-16 13:53:14 +03:00
// Host TLSRPT
wg.Add(1)
var hostTLSRPTAddr smtp.Address
if mox.Conf.Static.HostTLSRPT.Localpart != "" {
	hostTLSRPTAddr = smtp.NewAddress(mox.Conf.Static.HostTLSRPT.ParsedLocalpart, mox.Conf.Static.HostnameDomain)
}
go checkTLSRPT(&r.HostTLSRPT, mox.Conf.Static.HostnameDomain, hostTLSRPTAddr, true)
// Domain TLSRPT
wg.Add(1)
var domainTLSRPTAddr smtp.Address
if domConf.TLSRPT != nil {
	domainTLSRPTAddr = smtp.NewAddress(domConf.TLSRPT.ParsedLocalpart, domain)
}
go checkTLSRPT(&r.DomainTLSRPT, domain, domainTLSRPTAddr, false)
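as an illustration with placeholder names: the domain record requests reports
about MTA-STS results, the host record about DANE results, both of roughly
this shape:

	_smtp._tls.example.com.      TXT "v=TLSRPTv1; rua=mailto:tls-reports@example.com"
	_smtp._tls.mail.example.com. TXT "v=TLSRPTv1; rua=mailto:tls-reports@example.com"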
2023-01-30 16:27:06 +03:00
// MTA-STS
wg.Add(1)
go func() {
	defer logPanic(ctx)
	defer wg.Done()
2023-12-05 15:35:58 +03:00
	record, txt, err := mtasts.LookupRecord(ctx, log.Logger, resolver, domain)
2023-01-30 16:27:06 +03:00
	if err != nil {
		addf(&r.MTASTS.Errors, "Looking up MTA-STS record: %s", err)
	}
	r.MTASTS.TXT = txt
	if record != nil {
		r.MTASTS.Record = &MTASTSRecord{*record}
	}
2023-12-05 15:35:58 +03:00
	policy, text, err := mtasts.FetchPolicy(ctx, log.Logger, domain)
2023-01-30 16:27:06 +03:00
	if err != nil {
		addf(&r.MTASTS.Errors, "Fetching MTA-STS policy: %s", err)
	} else if policy.Mode == mtasts.ModeNone {
		addf(&r.MTASTS.Warnings, "MTA-STS policy is present, but does not require TLS.")
	} else if policy.Mode == mtasts.ModeTesting {
		addf(&r.MTASTS.Warnings, "MTA-STS policy is in testing mode, do not forget to change to mode enforce after the testing period.")
	}
	r.MTASTS.PolicyText = text
	r.MTASTS.Policy = policy
	if policy != nil && policy.Mode != mtasts.ModeNone {
		if !policy.Matches(mox.Conf.Static.HostnameDomain) {
			addf(&r.MTASTS.Warnings, "Configured hostname is missing from policy MX list.")
		}
		if policy.MaxAgeSeconds <= 24*3600 {
			addf(&r.MTASTS.Warnings, "Policy has a MaxAge of less than 1 day. For stable configurations, the recommended period is in weeks.")
		}
implement dnssec-awareness throughout code, and dane for incoming/outgoing mail delivery
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec secure) added, as well as support
for enhanced dns errors, and looking up tlsa records (for dane). ideally it
would be upstreamed, but the chances seem slim.
dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.
but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in the tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.
mox now applies dane to delivering messages over smtp. mox already implemented
mta-sts for webpki/pkix-verification of certificates against the (large) pool
of CA's, and still enforces those policies when present. but it now also checks
for dane records, and will verify those if present. if dane and mta-sts are
both absent, the regular opportunistic tls with starttls is still done. and the
fallback to plaintext is also still done.
mox also makes it easy to setup dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup, because their public keys must be published in tlsa records in dns.
autocert would generate private keys on its own, so had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change and we can drop the fork.
with this change, using the quickstart to setup a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
2023-10-10 13:09:35 +03:00
		mxl, _, _ := resolver.LookupMX(ctx, domain.ASCII+".")
2023-01-30 16:27:06 +03:00
		// We do not check for errors, the MX check will complain about mx errors, we assume we will get the same error here.
		mxs := map[dns.Domain]struct{}{}
		for _, mx := range mxl {
2023-02-03 17:54:34 +03:00
			d, err := dns.ParseDomain(strings.TrimSuffix(mx.Host, "."))
2023-01-30 16:27:06 +03:00
			if err != nil {
				addf(&r.MTASTS.Warnings, "MX record %q is invalid: %s", mx.Host, err)
				continue
			}
2023-02-03 17:54:34 +03:00
			mxs[d] = struct{}{}
2023-01-30 16:27:06 +03:00
		}
		for mx := range mxs {
			if !policy.Matches(mx) {
				addf(&r.MTASTS.Warnings, "MX record %q does not match MTA-STS policy MX list.", mx)
			}
		}
		for _, mx := range policy.MX {
			if mx.Wildcard {
				continue
			}
			if _, ok := mxs[mx.Domain]; !ok {
				addf(&r.MTASTS.Warnings, "MX %q in MTA-STS policy is not in MX record.", mx)
			}
		}
	}
2023-02-28 22:43:31 +03:00
	intro := `MTA-STS is an opt-in mechanism to signal to remote SMTP servers which MX records are valid and that they must use the STARTTLS command and verify the TLS connection. Email servers should already be using STARTTLS to protect communication, but active attackers can, and have in the past, removed the indication of support for the optional STARTTLS support from SMTP sessions, or added additional MX records in DNS responses. MTA-STS protects against compromised DNS and compromised plaintext SMTP sessions, but not against compromised internet PKI infrastructure. If an attacker controls a certificate authority, and is willing to use it, MTA-STS does not prevent an attack. MTA-STS does not protect against attackers on first contact with a domain. Only on subsequent contacts, with MTA-STS policies in the cache, can attacks be detected.

After enabling MTA-STS for this domain, remote SMTP servers may still deliver in plain text, without TLS-protection. MTA-STS is an opt-in mechanism, not all servers support it yet.

You can opt-in to MTA-STS by creating a DNS record, _mta-sts.<domain>, and serving a policy at https://mta-sts.<domain>/.well-known/mta-sts.txt. Mox will serve the policy, you must create the DNS records.

You can start with a policy in "testing" mode. Remote SMTP servers will apply the MTA-STS policy, but not abort delivery in case of failure. Instead, you will receive a report if you have TLSRPT configured. By starting in testing mode for a representative period, verifying all mail can be delivered, you can safely switch to "enforce" mode. While in enforce mode, plaintext deliveries to mox are refused.

The _mta-sts DNS TXT record has an "id" field. The id serves as a version of the policy. A policy specifies the mode: none, testing, enforce. For "none", no TLS is required. A policy has a "max age", indicating how long the policy can be cached. Allowing the policy to be cached for a long time provides stronger counter measures to active attackers, but reduces configuration change agility. After enabling "enforce" mode, remote SMTP servers may and will cache your policy for as long as "max age" was configured. Keep this in mind when enabling/disabling MTA-STS. To disable MTA-STS after having it enabled, publish a new record with mode "none" until all past policy expiration times have passed.

When enabling MTA-STS, or updating a policy, always update the policy first (through a configuration change and reload/restart), and the DNS record second.
`
	addf(&r.MTASTS.Instructions, intro)
	addf(&r.MTASTS.Instructions, `Enable a policy through the configuration file. For new deployments, it is best to start with mode "testing" while enabling TLSRPT. Start with a short "max_age", so updates to your policy are picked up quickly. When confidence in the deployment is high enough, switch to "enforce" mode and a longer "max age". A max age in the order of weeks is recommended. If you foresee a change to your setup in the future, requiring different policies or MX records, you may want to dial back the "max age" ahead of time, similar to how you would handle TTL's in DNS record updates.`)
2023-10-13 09:16:46 +03:00
	host := fmt.Sprintf("Ensure DNS CNAME/A/AAAA records exist that resolve mta-sts.%s to this mail server. For example:\n\n\t%s CNAME %s\n\n", domain.ASCII, "mta-sts."+domain.ASCII+".", mox.Conf.Static.HostnameDomain.ASCII+".")
2023-01-30 16:27:06 +03:00
	addf(&r.MTASTS.Instructions, host)
	mtastsr := mtasts.Record{
		Version: "STSv1",
		ID:      time.Now().Format("20060102T150405"),
	}
2023-10-13 09:16:46 +03:00
	dns := fmt.Sprintf("Ensure a DNS TXT record like the following exists:\n\n\t_mta-sts TXT %s\n\nConfigure the ID in the configuration file, it must be of the form [a-zA-Z0-9]{1,31}. It represents the version of the policy. For each policy change, you must change the ID to a new unique value. You could use a timestamp like 20220621T123000. When this field exists, an SMTP server will fetch a policy at https://mta-sts.%s/.well-known/mta-sts.txt. This policy is served by mox.", mox.TXTStrings(mtastsr.String()), domain.Name())
2023-01-30 16:27:06 +03:00
	addf(&r.MTASTS.Instructions, dns)
}()
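as an illustration, the policy document served at
https://mta-sts.example.com/.well-known/mta-sts.txt has this shape
(example.com, the mx host and the max_age value are placeholders):

	version: STSv1
	mode: testing
	mx: mail.example.com
	max_age: 86400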
// SRVConf
wg.Add(1)
go func() {
	defer logPanic(ctx)
	defer wg.Done()
	type srvReq struct {
		name string
		port uint16
		host string
		srvs []*net.SRV
		err  error
	}
	// We'll assume if any submissions is configured, it is public. Same for imap. And
	// if not, that there is a plain option.
	var submissions, imaps bool
	for _, l := range mox.Conf.Static.Listeners {
		if l.TLS != nil && l.Submissions.Enabled {
			submissions = true
		}
		if l.TLS != nil && l.IMAPS.Enabled {
			imaps = true
		}
	}
	srvhost := func(ok bool) string {
		if ok {
			return mox.Conf.Static.HostnameDomain.ASCII + "."
		}
		return "."
	}
	var reqs = []srvReq{
		{name: "_submissions", port: 465, host: srvhost(submissions)},
		{name: "_submission", port: 587, host: srvhost(!submissions)},
		{name: "_imaps", port: 993, host: srvhost(imaps)},
		{name: "_imap", port: 143, host: srvhost(!imaps)},
		{name: "_pop3", port: 110, host: "."},
		{name: "_pop3s", port: 995, host: "."},
	}
	var srvwg sync.WaitGroup
	srvwg.Add(len(reqs))
	for i := range reqs {
		go func(i int) {
			defer srvwg.Done()
			_, reqs[i].srvs, _, reqs[i].err = resolver.LookupSRV(ctx, reqs[i].name[1:], "tcp", domain.ASCII+".")
2023-01-30 16:27:06 +03:00
		}(i)
	}
	srvwg.Wait()
	instr := "Ensure DNS records like the following exist:\n\n"
2023-12-31 13:55:22 +03:00
	r.SRVConf.SRVs = map[string][]net.SRV{}
2023-01-30 16:27:06 +03:00
	for _, req := range reqs {
2023-02-03 17:54:34 +03:00
		name := req.name + "._tcp." + domain.ASCII
2023-10-13 09:16:46 +03:00
		instr += fmt.Sprintf("\t%s._tcp.%-*s SRV 0 1 %d %s\n", req.name, len("_submissions")-len(req.name)+len(domain.ASCII+"."), domain.ASCII+".", req.port, req.host)
2023-12-31 13:55:22 +03:00
		r.SRVConf.SRVs[req.name] = unptr(req.srvs)
2023-01-30 16:27:06 +03:00
		if req.err != nil {
			addf(&r.SRVConf.Errors, "Looking up SRV record %q: %s", name, req.err)
		} else if len(req.srvs) == 0 {
			addf(&r.SRVConf.Errors, "Missing SRV record %q", name)
		} else if len(req.srvs) != 1 || req.srvs[0].Target != req.host || req.srvs[0].Port != req.port {
			addf(&r.SRVConf.Errors, "Unexpected SRV record(s) for %q", name)
		}
	}
	addf(&r.SRVConf.Instructions, instr)
}()
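as an illustration for placeholder domain example.com with TLS-enabled
submissions and imaps listeners, the suggested records look like this; a
target of "." advertises that a service is not available:

	_submissions._tcp.example.com. SRV 0 1 465 mail.example.com.
	_submission._tcp.example.com.  SRV 0 1 587 .
	_imaps._tcp.example.com.       SRV 0 1 993 mail.example.com.
	_imap._tcp.example.com.        SRV 0 1 143 .
	_pop3._tcp.example.com.        SRV 0 1 110 .
	_pop3s._tcp.example.com.       SRV 0 1 995 .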
// Autoconf
wg.Add(1)
go func() {
	defer logPanic(ctx)
	defer wg.Done()
assume a dns cname record mail.<domain>, pointing to the hostname of the mail server, for clients to connect to
the autoconfig/autodiscover endpoints, and the printed client settings (in
quickstart, in the admin interface) now all point to the cname record (called
"client settings domain"). it is configurable per domain, and set to
"mail.<domain>" by default. for existing mox installs, the domain can be added
by editing the config file.
this makes it easier for a domain to migrate to another server in the future.
client settings don't have to be updated, the cname can just be changed.
before, the hostname of the mail server was configured in email clients.
migrating away would require changing settings in all clients.
if a client settings domain is configured, a TLS certificate for the name will
be requested through ACME, or must be configured manually.
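as an illustration with placeholder names: clients connect to the stable name
on the left, and migrating to another server later only requires repointing
the CNAME:

	mail.example.com. CNAME mailhost.example.net.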
2023-12-24 13:01:16 +03:00
	if domConf.ClientSettingsDomain != "" {
		addf(&r.Autoconf.Instructions, "Ensure a DNS CNAME record like the following exists:\n\n\t%s CNAME %s\n\nNote: the trailing dot is relevant, it makes the host name absolute instead of relative to the domain name.", domConf.ClientSettingsDNSDomain.ASCII+".", mox.Conf.Static.HostnameDomain.ASCII+".")
		ips, ourIPs, notOurIPs, err := lookupIPs(&r.Autoconf.Errors, domConf.ClientSettingsDNSDomain.ASCII+".")
		if err != nil {
			addf(&r.Autoconf.Errors, "Looking up client settings DNS CNAME: %s", err)
		}
		r.Autoconf.ClientSettingsDomainIPs = ips
		if !isUnspecifiedNAT {
			if len(ourIPs) == 0 {
				addf(&r.Autoconf.Errors, "Client settings domain does not point to one of our IPs.")
			} else if len(notOurIPs) > 0 {
				addf(&r.Autoconf.Errors, "Client settings domain points to some IPs that are not ours: %v", notOurIPs)
			}
		}
	}
2023-10-13 09:16:46 +03:00
	addf(&r.Autoconf.Instructions, "Ensure a DNS CNAME record like the following exists:\n\n\tautoconfig.%s CNAME %s\n\nNote: the trailing dot is relevant, it makes the host name absolute instead of relative to the domain name.", domain.ASCII+".", mox.Conf.Static.HostnameDomain.ASCII+".")
2023-01-30 16:27:06 +03:00
2023-02-03 17:54:34 +03:00
host := "autoconfig." + domain . ASCII + "."
2023-01-30 16:27:06 +03:00
ips , ourIPs , notOurIPs , err := lookupIPs ( & r . Autoconf . Errors , host )
if err != nil {
addf ( & r . Autoconf . Errors , "Looking up autoconfig host: %s" , err )
return
}
r . Autoconf . IPs = ips
2023-08-11 11:13:17 +03:00
if ! isUnspecifiedNAT {
2023-03-09 17:24:06 +03:00
if len ( ourIPs ) == 0 {
addf ( & r . Autoconf . Errors , "Autoconfig does not point to one of our IPs." )
} else if len ( notOurIPs ) > 0 {
addf ( & r . Autoconf . Errors , "Autoconfig points to some IPs that are not ours: %v" , notOurIPs )
}
2023-01-30 16:27:06 +03:00
}
2023-02-03 17:54:34 +03:00
checkTLS ( & r . Autoconf . Errors , "autoconfig." + domain . ASCII , ips , "443" )
2023-01-30 16:27:06 +03:00
} ( )
// Autodiscover
wg.Add(1)
go func() {
	defer logPanic(ctx)
	defer wg.Done()
2023-10-13 09:51:02 +03:00
	addf(&r.Autodiscover.Instructions, "Ensure DNS records like the following exist:\n\n\t_autodiscover._tcp.%s SRV 0 1 443 %s\n\tautoconfig.%s CNAME %s\n\nNote: the trailing dots are relevant, they make the host names absolute instead of relative to the domain name.", domain.ASCII+".", mox.Conf.Static.HostnameDomain.ASCII+".", domain.ASCII+".", mox.Conf.Static.HostnameDomain.ASCII+".")
2023-01-30 16:27:06 +03:00
implement dnssec-awareness throughout code, and dane for incoming/outgoing mail delivery
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec secure) added, as well as support
for enhanced dns errors, and looking up tlsa records (for dane). ideally it
would be upstreamed, but the chances seem slim.
dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.
but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in the tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.
mox now applies dane to delivering messages over smtp. mox already implemented
mta-sts for webpki/pkix-verification of certificates against the (large) pool
of CA's, and still enforces those policies when present. but it now also checks
for dane records, and will verify those if present. if dane and mta-sts are
both absent, the regular opportunistic tls with starttls is still done. and the
fallback to plaintext is also still done.
mox also makes it easy to set up dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup, because their public keys must be published in tlsa records in dns.
autocert would generate private keys on its own, so had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change and we can drop the fork.
with this change, using the quickstart to set up a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
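for illustration, a minimal sketch (not the mox implementation; the file path and host name are made up) of how the rdata of a dane-ee tlsa record (usage 3, selector 1 "spki", matching type 1 "sha2-256") is derived from a certificate:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemData, err := os.ReadFile("cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemData)
	if block == nil {
		panic("no pem block in cert.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// hash the der-encoded subjectpublickeyinfo. this is why the key pair
	// must be pre-generated: the hash has to be published in dns and stay
	// stable across certificate renewals.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("_25._tcp.mail.example.org. TLSA 3 1 1 %x\n", sum)
}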
2023-10-10 13:09:35 +03:00
		_, srvs, _, err := resolver.LookupSRV(ctx, "autodiscover", "tcp", domain.ASCII+".")
2023-01-30 16:27:06 +03:00
		if err != nil {
			addf(&r.Autodiscover.Errors, "Looking up SRV record %q: %s", "autodiscover", err)
			return
		}
		match := false
		for _, srv := range srvs {
			ips, ourIPs, notOurIPs, err := lookupIPs(&r.Autodiscover.Errors, srv.Target)
			if err != nil {
				addf(&r.Autodiscover.Errors, "Looking up target %q from SRV record: %s", srv.Target, err)
				continue
			}
			if srv.Port != 443 {
				continue
			}
			match = true
			r.Autodiscover.Records = append(r.Autodiscover.Records, AutodiscoverSRV{*srv, ips})
2023-08-11 11:13:17 +03:00
			if !isUnspecifiedNAT {
2023-03-09 17:24:06 +03:00
				if len(ourIPs) == 0 {
					addf(&r.Autodiscover.Errors, "SRV target %q does not point to our IPs.", srv.Target)
				} else if len(notOurIPs) > 0 {
					addf(&r.Autodiscover.Errors, "SRV target %q points to some IPs that are not ours: %v", srv.Target, notOurIPs)
				}
2023-01-30 16:27:06 +03:00
			}
			checkTLS(&r.Autodiscover.Errors, strings.TrimSuffix(srv.Target, "."), ips, "443")
		}
		if !match {
			addf(&r.Autodiscover.Errors, "No SRV record for port 443 for https.")
		}
	}()
	wg.Wait()
	return
}
// Domains returns all configured domain names, in UTF-8 for IDNA domains.
func (Admin) Domains(ctx context.Context) []dns.Domain {
	l := []dns.Domain{}
	for _, s := range mox.Conf.Domains() {
		d, _ := dns.ParseDomain(s)
		l = append(l, d)
	}
	return l
}
// Domain returns the dns domain for a (potentially unicode as IDNA) domain name.
func (Admin) Domain(ctx context.Context, domain string) dns.Domain {
	d, err := dns.ParseDomain(domain)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "parse domain")
2023-01-30 16:27:06 +03:00
	_, ok := mox.Conf.Domain(d)
	if !ok {
2023-08-09 09:02:58 +03:00
		xcheckuserf(ctx, errors.New("no such domain"), "looking up domain")
2023-01-30 16:27:06 +03:00
	}
	return d
}
2023-11-12 16:58:46 +03:00
// ParseDomain parses a domain, possibly an IDNA domain.
func (Admin) ParseDomain(ctx context.Context, domain string) dns.Domain {
	d, err := dns.ParseDomain(domain)
	xcheckuserf(ctx, err, "parse domain")
	return d
}
2023-03-29 22:11:43 +03:00
// DomainLocalparts returns the encoded localparts and accounts configured in domain.
func (Admin) DomainLocalparts(ctx context.Context, domain string) (localpartAccounts map[string]string) {
2023-01-30 16:27:06 +03:00
	d, err := dns.ParseDomain(domain)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "parsing domain")
2023-01-30 16:27:06 +03:00
	_, ok := mox.Conf.Domain(d)
	if !ok {
2023-08-09 09:02:58 +03:00
		xcheckuserf(ctx, errors.New("no such domain"), "looking up domain")
2023-01-30 16:27:06 +03:00
	}
	return mox.Conf.DomainLocalparts(d)
}
// Accounts returns the names of all configured accounts.
func (Admin) Accounts(ctx context.Context) []string {
	l := mox.Conf.Accounts()
	sort.Slice(l, func(i, j int) bool {
		return l[i] < l[j]
	})
	return l
}
// Account returns the parsed configuration of an account.
2024-04-14 18:18:20 +03:00
func (Admin) Account(ctx context.Context, account string) (accountConfig config.Account, diskUsage int64) {
2024-03-11 16:02:35 +03:00
	log := pkglog.WithContext(ctx)
	acc, err := store.OpenAccount(log, account)
	if err != nil && errors.Is(err, store.ErrAccountUnknown) {
		xcheckuserf(ctx, err, "looking up account")
2023-01-30 16:27:06 +03:00
	}
2024-03-11 16:02:35 +03:00
	xcheckf(ctx, err, "open account")
	defer func() {
		err := acc.Close()
		log.Check(err, "closing account")
	}()
	var ac config.Account
	acc.WithRLock(func() {
		ac, _ = mox.Conf.Account(acc.Name)
		err := acc.DB.Read(ctx, func(tx *bstore.Tx) error {
			du := store.DiskUsage{ID: 1}
			err := tx.Get(&du)
			diskUsage = du.MessageSize
			return err
		})
		xcheckf(ctx, err, "get disk usage")
	})
2023-01-30 16:27:06 +03:00
2024-04-14 18:18:20 +03:00
	return ac, diskUsage
2023-01-30 16:27:06 +03:00
}
// ConfigFiles returns the paths and contents of the static and dynamic configuration files.
func (Admin) ConfigFiles(ctx context.Context) (staticPath, dynamicPath, static, dynamic string) {
	buf0, err := os.ReadFile(mox.ConfigStaticPath)
	xcheckf(ctx, err, "read static config file")
	buf1, err := os.ReadFile(mox.ConfigDynamicPath)
	xcheckf(ctx, err, "read dynamic config file")
	return mox.ConfigStaticPath, mox.ConfigDynamicPath, string(buf0), string(buf1)
}
// MTASTSPolicies returns all mtasts policies from the cache.
func (Admin) MTASTSPolicies(ctx context.Context) (records []mtastsdb.PolicyRecord) {
	records, err := mtastsdb.PolicyRecords(ctx)
	xcheckf(ctx, err, "fetching mtasts policies from database")
	return records
}
// TLSReports returns TLS reports overlapping with period start/end, for the given
2023-11-12 16:19:12 +03:00
// policy domain (or all domains if empty). The reports are sorted first by period
// end (most recent first), then by policy domain.
func (Admin) TLSReports(ctx context.Context, start, end time.Time, policyDomain string) (reports []tlsrptdb.TLSReportRecord) {
	var polDom dns.Domain
	if policyDomain != "" {
		var err error
		polDom, err = dns.ParseDomain(policyDomain)
		xcheckuserf(ctx, err, "parsing domain %q", policyDomain)
	}
	records, err := tlsrptdb.RecordsPeriodDomain(ctx, start, end, polDom)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "fetching tlsrpt report records from database")
	sort.Slice(records, func(i, j int) bool {
		iend := records[i].Report.DateRange.End
		jend := records[j].Report.DateRange.End
		if iend == jend {
			return records[i].Domain < records[j].Domain
		}
		return iend.After(jend)
	})
	return records
}
// TLSReportID returns a single TLS report.
func (Admin) TLSReportID(ctx context.Context, domain string, reportID int64) tlsrptdb.TLSReportRecord {
	record, err := tlsrptdb.RecordID(ctx, reportID)
	if err == nil && record.Domain != domain {
		err = bstore.ErrAbsent
	}
2023-08-09 09:02:58 +03:00
	if err == bstore.ErrAbsent {
		xcheckuserf(ctx, err, "fetching tls report from database")
	}
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "fetching tls report from database")
	return record
}
// TLSRPTSummary presents TLS reporting statistics for a single domain
// over a period.
type TLSRPTSummary struct {
2023-11-12 16:19:12 +03:00
	PolicyDomain dns.Domain
2023-01-30 16:27:06 +03:00
	Success int64
	Failure int64
2023-11-12 16:58:46 +03:00
	ResultTypeCounts map[tlsrpt.ResultType]int64
2023-01-30 16:27:06 +03:00
}
// TLSRPTSummaries returns a summary of received TLS reports overlapping with
// period start/end for one or all domains (when domain is empty).
// The returned summaries are ordered by domain name.
2023-11-12 16:19:12 +03:00
func (Admin) TLSRPTSummaries(ctx context.Context, start, end time.Time, policyDomain string) (domainSummaries []TLSRPTSummary) {
	var polDom dns.Domain
	if policyDomain != "" {
		var err error
		polDom, err = dns.ParseDomain(policyDomain)
		xcheckuserf(ctx, err, "parsing policy domain")
	}
	reports, err := tlsrptdb.RecordsPeriodDomain(ctx, start, end, polDom)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "fetching tlsrpt reports from database")
2023-11-12 16:19:12 +03:00
	summaries := map[dns.Domain]TLSRPTSummary{}
2023-01-30 16:27:06 +03:00
	for _, r := range reports {
2023-11-12 16:19:12 +03:00
		dom, err := dns.ParseDomain(r.Domain)
		xcheckf(ctx, err, "parsing domain %q", r.Domain)
		sum := summaries[dom]
		sum.PolicyDomain = dom
2023-01-30 16:27:06 +03:00
		for _, result := range r.Report.Policies {
			sum.Success += result.Summary.TotalSuccessfulSessionCount
			sum.Failure += result.Summary.TotalFailureSessionCount
			for _, details := range result.FailureDetails {
				if sum.ResultTypeCounts == nil {
2023-11-12 16:58:46 +03:00
					sum.ResultTypeCounts = map[tlsrpt.ResultType]int64{}
2023-01-30 16:27:06 +03:00
				}
2023-11-12 16:58:46 +03:00
				sum.ResultTypeCounts[details.ResultType] += details.FailedSessionCount
2023-01-30 16:27:06 +03:00
			}
		}
2023-11-12 16:19:12 +03:00
		summaries[dom] = sum
2023-01-30 16:27:06 +03:00
	}
	sums := make([]TLSRPTSummary, 0, len(summaries))
	for _, sum := range summaries {
		sums = append(sums, sum)
	}
	sort.Slice(sums, func(i, j int) bool {
2023-11-12 16:19:12 +03:00
		return sums[i].PolicyDomain.Name() < sums[j].PolicyDomain.Name()
2023-01-30 16:27:06 +03:00
	})
	return sums
}
// DMARCReports returns DMARC reports overlapping with period start/end, for the
// given domain (or all domains if empty). The reports are sorted first by period
// end (most recent first), then by domain.
func (Admin) DMARCReports(ctx context.Context, start, end time.Time, domain string) (reports []dmarcdb.DomainFeedback) {
	reports, err := dmarcdb.RecordsPeriodDomain(ctx, start, end, domain)
2023-11-01 19:55:40 +03:00
	xcheckf(ctx, err, "fetching dmarc aggregate reports from database")
2023-01-30 16:27:06 +03:00
	sort.Slice(reports, func(i, j int) bool {
		iend := reports[i].ReportMetadata.DateRange.End
		jend := reports[j].ReportMetadata.DateRange.End
		if iend == jend {
			return reports[i].Domain < reports[j].Domain
		}
		return iend > jend
	})
	return reports
}
// DMARCReportID returns a single DMARC report.
func (Admin) DMARCReportID(ctx context.Context, domain string, reportID int64) (report dmarcdb.DomainFeedback) {
	report, err := dmarcdb.RecordID(ctx, reportID)
	if err == nil && report.Domain != domain {
		err = bstore.ErrAbsent
	}
2023-08-09 09:02:58 +03:00
	if err == bstore.ErrAbsent {
2023-11-01 19:55:40 +03:00
		xcheckuserf(ctx, err, "fetching dmarc aggregate report from database")
2023-08-09 09:02:58 +03:00
	}
2023-11-01 19:55:40 +03:00
	xcheckf(ctx, err, "fetching dmarc aggregate report from database")
2023-01-30 16:27:06 +03:00
	return report
}
// DMARCSummary presents DMARC aggregate reporting statistics for a single domain
// over a period.
type DMARCSummary struct {
	Domain                string
	Total                 int
	DispositionNone       int
	DispositionQuarantine int
	DispositionReject     int
	DKIMFail              int
	SPFFail               int
	PolicyOverrides       map[dmarcrpt.PolicyOverride]int
}
// DMARCSummaries returns a summary of received DMARC reports overlapping with
// period start/end for one or all domains (when domain is empty).
// The returned summaries are ordered by domain name.
func (Admin) DMARCSummaries(ctx context.Context, start, end time.Time, domain string) (domainSummaries []DMARCSummary) {
	reports, err := dmarcdb.RecordsPeriodDomain(ctx, start, end, domain)
2023-11-01 19:55:40 +03:00
	xcheckf(ctx, err, "fetching dmarc aggregate reports from database")
2023-01-30 16:27:06 +03:00
	summaries := map[string]DMARCSummary{}
	for _, r := range reports {
		sum := summaries[r.Domain]
		sum.Domain = r.Domain
		for _, record := range r.Records {
			n := record.Row.Count
			sum.Total += n
			switch record.Row.PolicyEvaluated.Disposition {
			case dmarcrpt.DispositionNone:
				sum.DispositionNone += n
			case dmarcrpt.DispositionQuarantine:
				sum.DispositionQuarantine += n
			case dmarcrpt.DispositionReject:
				sum.DispositionReject += n
			}
			if record.Row.PolicyEvaluated.DKIM == dmarcrpt.DMARCFail {
				sum.DKIMFail += n
			}
			if record.Row.PolicyEvaluated.SPF == dmarcrpt.DMARCFail {
				sum.SPFFail += n
			}
			for _, reason := range record.Row.PolicyEvaluated.Reasons {
				if sum.PolicyOverrides == nil {
					sum.PolicyOverrides = map[dmarcrpt.PolicyOverride]int{}
				}
				sum.PolicyOverrides[reason.Type] += n
			}
		}
		summaries[r.Domain] = sum
	}
	sums := make([]DMARCSummary, 0, len(summaries))
	for _, sum := range summaries {
		sums = append(sums, sum)
	}
	sort.Slice(sums, func(i, j int) bool {
		return sums[i].Domain < sums[j].Domain
	})
	return sums
}
// Reverse is the result of a reverse lookup.
type Reverse struct {
	Hostnames []string
	// In the future, we can add an iprev-validated host name, and possibly the IPs of the host names.
}
// LookupIP does a reverse lookup of ip.
func (Admin) LookupIP(ctx context.Context, ip string) Reverse {
2023-12-05 15:35:58 +03:00
	resolver := dns.StrictResolver{Pkg: "webadmin", Log: pkglog.WithContext(ctx).Logger}
2023-10-10 13:09:35 +03:00
	names, _, err := resolver.LookupAddr(ctx, ip)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "looking up ip")
2023-01-30 16:27:06 +03:00
	return Reverse{names}
}
// DNSBLStatus returns the IPs from which outgoing connections may be made and
// their current status in DNSBLs that are configured. The IPs are typically the
// configured listen IPs, or otherwise IPs on the machine's network interfaces, with
// internal/private IPs removed.
//
// The returned value maps IPs to per-DNSBL statuses, where "pass" means not listed and
// anything else is an error string, e.g. "fail: ..." or "temperror: ...".
2024-03-05 18:30:38 +03:00
func (Admin) DNSBLStatus(ctx context.Context) (results map[string]map[string]string, using, monitoring []dns.Domain) {
2023-12-05 15:35:58 +03:00
	log := mlog.New("webadmin", nil).WithContext(ctx)
	resolver := dns.StrictResolver{Pkg: "check", Log: log.Logger}
	return dnsblsStatus(ctx, log, resolver)
2023-01-30 16:27:06 +03:00
}
2024-03-05 18:30:38 +03:00
func dnsblsStatus(ctx context.Context, log mlog.Log, resolver dns.Resolver) (results map[string]map[string]string, using, monitoring []dns.Domain) {
2023-01-30 16:27:06 +03:00
	// todo: check health before using dnsbl?
2024-03-05 18:30:38 +03:00
	using = mox.Conf.Static.Listeners["public"].SMTP.DNSBLZones
	zones := append([]dns.Domain{}, using...)
	for _, zone := range mox.Conf.MonitorDNSBLs() {
		if !slices.Contains(zones, zone) {
			zones = append(zones, zone)
			monitoring = append(monitoring, zone)
2023-01-30 16:27:06 +03:00
		}
	}
	r := map[string]map[string]string{}
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
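a minimal sketch of that selection order, with made-up types and field names (not mox's actual config structs): account routes are tried first, then domain routes, then global routes; the first matching route wins, and no match means direct delivery:

package main

import "fmt"

// Route is a simplified stand-in for a configured route.
type Route struct {
	FromDomain     string // empty matches any sender domain
	ToDomain       string // empty matches any recipient domain
	MinimumAttempt int    // e.g. 4 to only match from the 4th delivery attempt on
	Transport      string // empty means direct delivery
}

func selectTransport(accountRoutes, domainRoutes, globalRoutes []Route, from, to string, attempt int) string {
	for _, routes := range [][]Route{accountRoutes, domainRoutes, globalRoutes} {
		for _, r := range routes {
			if r.FromDomain != "" && r.FromDomain != from {
				continue
			}
			if r.ToDomain != "" && r.ToDomain != to {
				continue
			}
			if attempt < r.MinimumAttempt {
				continue
			}
			return r.Transport
		}
	}
	return "" // no route matched: direct delivery
}

func main() {
	global := []Route{{ToDomain: "example.com", MinimumAttempt: 4, Transport: "smarthost"}}
	fmt.Printf("%q\n", selectTransport(nil, nil, global, "example.org", "example.com", 1)) // "" (direct)
	fmt.Printf("%q\n", selectTransport(nil, nil, global, "example.org", "example.com", 4)) // "smarthost"
}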
2023-06-16 19:38:28 +03:00
	for _, ip := range xsendingIPs(ctx) {
2023-01-30 16:27:06 +03:00
		if ip.IsLoopback() || ip.IsPrivate() {
			continue
		}
		ipstr := ip.String()
		r[ipstr] = map[string]string{}
2024-03-05 18:30:38 +03:00
		for _, zone := range zones {
2023-12-05 15:35:58 +03:00
			status, expl, err := dnsbl.Lookup(ctx, log.Logger, resolver, zone, ip)
2023-01-30 16:27:06 +03:00
			result := string(status)
			if err != nil {
				result += ": " + err.Error()
			}
			if expl != "" {
				result += ": " + expl
			}
2023-03-09 22:18:34 +03:00
			r[ipstr][zone.LogString()] = result
2023-01-30 16:27:06 +03:00
		}
	}
2024-03-05 18:30:38 +03:00
	return r, using, monitoring
}
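for context, these lookups follow the common dns blocklist convention; a generic sketch (not the dnsbl package api; zone and ip are made up): reverse the ipv4 octets, prepend them to the zone, and treat any A record answer as "listed":

package main

import (
	"fmt"
	"net"
)

func dnsblQuery(zone string, ip net.IP) string {
	v4 := ip.To4()
	if v4 == nil {
		return "" // ipv6 uses reversed nibbles, omitted in this sketch
	}
	return fmt.Sprintf("%d.%d.%d.%d.%s.", v4[3], v4[2], v4[1], v4[0], zone)
}

func main() {
	q := dnsblQuery("dnsbl.example.org", net.ParseIP("198.51.100.7"))
	fmt.Println(q) // 7.100.51.198.dnsbl.example.org.
	if addrs, err := net.LookupHost(q); err == nil && len(addrs) > 0 {
		fmt.Println("listed:", addrs)
	} else {
		fmt.Println("not listed (or lookup error)")
	}
}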
func (Admin) MonitorDNSBLsSave(ctx context.Context, text string) {
	var zones []dns.Domain
	publicZones := mox.Conf.Static.Listeners["public"].SMTP.DNSBLZones
	for _, line := range strings.Split(text, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		d, err := dns.ParseDomain(line)
		xcheckuserf(ctx, err, "parsing dnsbl zone %s", line)
		if slices.Contains(zones, d) {
			xusererrorf(ctx, "duplicate dnsbl zone %s", line)
		}
		if slices.Contains(publicZones, d) {
			xusererrorf(ctx, "dnsbl zone %s already present in public listener", line)
		}
		zones = append(zones, d)
	}
	err := mox.MonitorDNSBLsSave(ctx, zones)
	xcheckf(ctx, err, "saving monitoring dnsbl zones")
2023-01-30 16:27:06 +03:00
}
// DomainRecords returns lines describing DNS records that should exist for the
// configured domain.
func (Admin) DomainRecords(ctx context.Context, domain string) []string {
2023-12-21 17:16:30 +03:00
	log := pkglog.WithContext(ctx)
	return DomainRecords(ctx, log, domain)
}
// DomainRecords is the implementation of API function Admin.DomainRecords, taking
// a logger.
func DomainRecords(ctx context.Context, log mlog.Log, domain string) []string {
2023-01-30 16:27:06 +03:00
	d, err := dns.ParseDomain(domain)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "parsing domain")
2023-01-30 16:27:06 +03:00
	dc, ok := mox.Conf.Domain(d)
	if !ok {
2023-08-09 09:02:58 +03:00
		xcheckuserf(ctx, errors.New("unknown domain"), "lookup domain")
2023-01-30 16:27:06 +03:00
	}
2023-12-05 15:35:58 +03:00
	resolver := dns.StrictResolver{Pkg: "webadmin", Log: pkglog.WithContext(ctx).Logger}
2023-10-10 13:09:35 +03:00
	_, result, err := resolver.LookupTXT(ctx, domain+".")
	if !dns.IsNotFound(err) {
		xcheckf(ctx, err, "looking up record to determine if dnssec is implemented")
	}
2023-12-21 17:16:30 +03:00
	var certIssuerDomainName, acmeAccountURI string
	public := mox.Conf.Static.Listeners["public"]
	if public.TLS != nil && public.TLS.ACME != "" {
		acme, ok := mox.Conf.Static.ACME[public.TLS.ACME]
		if ok && acme.Manager.Manager.Client != nil {
			certIssuerDomainName = acme.IssuerDomainName
			acc, err := acme.Manager.Manager.Client.GetReg(ctx, "")
			log.Check(err, "get public acme account")
			if err == nil {
				acmeAccountURI = acc.URI
			}
		}
	}
	records, err := mox.DomainRecords(dc, d, result.Authentic, certIssuerDomainName, acmeAccountURI)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "dns records")
	return records
}
// DomainAdd adds a new domain and reloads the configuration.
func (Admin) DomainAdd(ctx context.Context, domain, accountName, localpart string) {
	d, err := dns.ParseDomain(domain)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "parsing domain")
2023-01-30 16:27:06 +03:00
2024-03-08 23:08:40 +03:00
	err = mox.DomainAdd(ctx, d, accountName, smtp.Localpart(norm.NFC.String(localpart)))
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "adding domain")
}
// DomainRemove removes an existing domain and reloads the configuration.
func (Admin) DomainRemove(ctx context.Context, domain string) {
	d, err := dns.ParseDomain(domain)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "parsing domain")
2023-01-30 16:27:06 +03:00
	err = mox.DomainRemove(ctx, d)
	xcheckf(ctx, err, "removing domain")
}
2023-09-23 13:05:40 +03:00
// AccountAdd adds a new account, with an initial email address, and
// reloads the configuration.
2023-01-30 16:27:06 +03:00
func (Admin) AccountAdd(ctx context.Context, accountName, address string) {
	err := mox.AccountAdd(ctx, accountName, address)
	xcheckf(ctx, err, "adding account")
}
// AccountRemove removes an existing account and reloads the configuration.
func (Admin) AccountRemove(ctx context.Context, accountName string) {
	err := mox.AccountRemove(ctx, accountName)
	xcheckf(ctx, err, "removing account")
}
// AddressAdd adds a new address to the account, which must already exist.
func (Admin) AddressAdd(ctx context.Context, address, accountName string) {
	err := mox.AddressAdd(ctx, address, accountName)
	xcheckf(ctx, err, "adding address")
}
// AddressRemove removes an existing address.
func (Admin) AddressRemove(ctx context.Context, address string) {
	err := mox.AddressRemove(ctx, address)
	xcheckf(ctx, err, "removing address")
}
// SetPassword saves a new password for an account, invalidating the previous password.
// Sessions are not interrupted, and will keep working. New login attempts must use the new password.
// Password must be at least 8 characters.
func (Admin) SetPassword(ctx context.Context, accountName, password string) {
2023-12-05 15:35:58 +03:00
	log := pkglog.WithContext(ctx)
2023-01-30 16:27:06 +03:00
	if len(password) < 8 {
2024-03-05 18:30:38 +03:00
		xusererrorf(ctx, "password must be at least 8 characters")
2023-01-30 16:27:06 +03:00
	}
2023-12-05 15:35:58 +03:00
	acc, err := store.OpenAccount(log, accountName)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "open account")
2023-02-16 15:22:00 +03:00
	defer func() {
		err := acc.Close()
2023-12-05 15:35:58 +03:00
		log.WithContext(ctx).Check(err, "closing account")
2023-02-16 15:22:00 +03:00
	}()
2023-12-05 15:35:58 +03:00
	err = acc.SetPassword(log, password)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "setting password")
}
2024-03-16 22:24:07 +03:00
// AccountSettingsSave sets new settings for an account that only an admin can set.
func (Admin) AccountSettingsSave(ctx context.Context, accountName string, maxOutgoingMessagesPerDay, maxFirstTimeRecipientsPerDay int, maxMsgSize int64, firstTimeSenderDelay bool) {
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
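a minimal sketch of the "unique smtp mail from" idea, with made-up names (not mox internals): insert a random fromid after the catchall separator, so a later dsn addressed to it can be matched back to the original outgoing message:

package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// uniqueMailFrom returns an smtp mail from address with a random fromid
// inserted after the configured localpart catchall separator.
func uniqueMailFrom(localpart, separator, domain string) (address, fromid string) {
	buf := make([]byte, 9)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	fromid = base64.RawURLEncoding.EncodeToString(buf)
	return localpart + separator + fromid + "@" + domain, fromid
}

func main() {
	addr, fromid := uniqueMailFrom("newsletter", "+", "example.org")
	fmt.Println(addr)   // e.g. newsletter+3kTkQ6tHp9GK@example.org
	fmt.Println(fromid) // stored with the queued message, for dsn matching
}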
2024-04-15 22:49:02 +03:00
	err := mox.AccountSave(ctx, accountName, func(acc *config.Account) {
		acc.MaxOutgoingMessagesPerDay = maxOutgoingMessagesPerDay
		acc.MaxFirstTimeRecipientsPerDay = maxFirstTimeRecipientsPerDay
		acc.QuotaMessageSize = maxMsgSize
		acc.NoFirstTimeSenderDelay = !firstTimeSenderDelay
	})
2024-03-16 22:24:07 +03:00
	xcheckf(ctx, err, "saving account settings")
2023-03-28 21:50:36 +03:00
}
2023-09-23 13:05:40 +03:00
// ClientConfigsDomain returns configurations for email clients, IMAP and
2023-01-30 16:27:06 +03:00
// Submission (SMTP) for the domain.
2023-09-23 13:05:40 +03:00
func (Admin) ClientConfigsDomain(ctx context.Context, domain string) mox.ClientConfigs {
2023-01-30 16:27:06 +03:00
	d, err := dns.ParseDomain(domain)
2023-08-09 09:02:58 +03:00
	xcheckuserf(ctx, err, "parsing domain")
2023-01-30 16:27:06 +03:00
2023-09-23 13:05:40 +03:00
	cc, err := mox.ClientConfigsDomain(d)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "client config for domain")
	return cc
}
2024-03-18 10:50:42 +03:00
// QueueSize returns the number of messages currently in the outgoing queue.
func (Admin) QueueSize(ctx context.Context) int {
	n, err := queue.Count(ctx)
2023-01-30 16:27:06 +03:00
	xcheckf(ctx, err, "listing messages in queue")
2024-03-18 10:50:42 +03:00
	return n
}
// QueueHoldRuleList lists the hold rules.
func (Admin) QueueHoldRuleList(ctx context.Context) []queue.HoldRule {
	l, err := queue.HoldRuleList(ctx)
	xcheckf(ctx, err, "listing queue hold rules")
2023-01-30 16:27:06 +03:00
	return l
}
2024-03-18 10:50:42 +03:00
// QueueHoldRuleAdd adds a hold rule. Newly submitted and existing messages
// matching the hold rule will be marked "on hold".
func (Admin) QueueHoldRuleAdd(ctx context.Context, hr queue.HoldRule) queue.HoldRule {
	var err error
	hr.SenderDomain, err = dns.ParseDomain(hr.SenderDomainStr)
	xcheckuserf(ctx, err, "parsing sender domain %q", hr.SenderDomainStr)
	hr.RecipientDomain, err = dns.ParseDomain(hr.RecipientDomainStr)
	xcheckuserf(ctx, err, "parsing recipient domain %q", hr.RecipientDomainStr)
	log := pkglog.WithContext(ctx)
	hr, err = queue.HoldRuleAdd(ctx, log, hr)
	xcheckf(ctx, err, "adding queue hold rule")
	return hr
}
// QueueHoldRuleRemove removes a hold rule. The Hold field of messages in
// the queue is not changed.
func (Admin) QueueHoldRuleRemove(ctx context.Context, holdRuleID int64) {
	log := pkglog.WithContext(ctx)
	err := queue.HoldRuleRemove(ctx, log, holdRuleID)
	xcheckf(ctx, err, "removing queue hold rule")
}
// QueueList returns the messages currently in the outgoing queue.
2024-04-15 22:49:02 +03:00
func (Admin) QueueList(ctx context.Context, filter queue.Filter, sort queue.Sort) []queue.Msg {
	l, err := queue.List(ctx, filter, sort)
2023-02-08 21:42:21 +03:00
	xcheckf(ctx, err, "listing messages in queue")
2024-03-18 10:50:42 +03:00
	return l
}
// QueueNextAttemptSet sets a new time for next delivery attempt of matching
// messages from the queue.
func (Admin) QueueNextAttemptSet(ctx context.Context, filter queue.Filter, minutes int) (affected int) {
	n, err := queue.NextAttemptSet(ctx, filter, time.Now().Add(time.Duration(minutes)*time.Minute))
	xcheckf(ctx, err, "setting new next delivery attempt time for matching messages in queue")
2023-02-08 21:42:21 +03:00
	return n
}
2024-03-18 10:50:42 +03:00
// QueueNextAttemptAdd adds a duration to the time of next delivery attempt of
// matching messages from the queue.
func (Admin) QueueNextAttemptAdd(ctx context.Context, filter queue.Filter, minutes int) (affected int) {
	n, err := queue.NextAttemptAdd(ctx, filter, time.Duration(minutes)*time.Minute)
	xcheckf(ctx, err, "adding duration to next delivery attempt for matching messages in queue")
	return n
2023-01-30 16:27:06 +03:00
}
2024-03-18 10:50:42 +03:00
// QueueHoldSet sets the Hold field of matching messages in the queue.
func (Admin) QueueHoldSet(ctx context.Context, filter queue.Filter, onHold bool) (affected int) {
	n, err := queue.HoldSet(ctx, filter, onHold)
	xcheckf(ctx, err, "changing onhold for matching messages in queue")
	return n
}
// QueueFail fails delivery for matching messages, causing DSNs to be sent.
func (Admin) QueueFail(ctx context.Context, filter queue.Filter) (affected int) {
2023-12-05 15:35:58 +03:00
	log := pkglog.WithContext(ctx)
2024-03-18 10:50:42 +03:00
	n, err := queue.Fail(ctx, log, filter)
	xcheckf(ctx, err, "failing messages in queue")
	return n
2023-01-30 16:27:06 +03:00
}
2023-02-06 17:17:46 +03:00
2024-03-18 10:50:42 +03:00
// QueueDrop removes matching messages from the queue.
func (Admin) QueueDrop(ctx context.Context, filter queue.Filter) (affected int) {
	log := pkglog.WithContext(ctx)
	n, err := queue.Drop(ctx, log, filter)
	xcheckf(ctx, err, "drop messages from queue")
	return n
}
// QueueRequireTLSSet updates the requiretls field for matching messages in the
// queue, to be used for the next delivery.
func (Admin) QueueRequireTLSSet(ctx context.Context, filter queue.Filter, requireTLS *bool) (affected int) {
	n, err := queue.RequireTLSSet(ctx, filter, requireTLS)
	xcheckf(ctx, err, "update requiretls for messages in queue")
	return n
}
// QueueTransportSet sets the transport to use for delivery of matching messages
// from the queue.
func (Admin) QueueTransportSet(ctx context.Context, filter queue.Filter, transport string) (affected int) {
	n, err := queue.TransportSet(ctx, filter, transport)
	xcheckf(ctx, err, "changing transport for messages in queue")
	return n
implement "requiretls", rfc 8689
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:
1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so entire transport is verified-tls-protected).
2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).
we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.
for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select the "fallback to insecure" to add the
"tls-required: no" header.
new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.
the admin can change the requiretls status for messages in the queue. so with
default delivery attempts, when verified tls is required but failing, an admin
could potentially change the field to "tls-required: no"-behaviour.
messages received (over smtp) with the requiretls option, get a comment added
to their Received header line, just before "id", after "with".
2023-10-24 11:06:16 +03:00
}
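to make the two rfc 8689 mechanisms concrete, a small illustration (not mox code; addresses are made up): the REQUIRETLS parameter on MAIL FROM asks for verified tls along the whole path, while a "TLS-Required: No" header asks receivers to deliver despite tls problems:

package main

import "fmt"

func main() {
	// mechanism 1: require verified tls for the entire delivery path.
	// only valid if the server announced REQUIRETLS in its ehlo response.
	fmt.Print("MAIL FROM:<alice@example.org> REQUIRETLS\r\n")

	// mechanism 2: opt out of tls requirements for this message, e.g. for
	// delivering tls reports despite a broken policy at the recipient.
	fmt.Print("TLS-Required: No\r\n" +
		"From: <alice@example.org>\r\n" +
		"To: <bob@example.com>\r\n" +
		"Subject: tls report\r\n" +
		"\r\n" +
		"report body\r\n")
}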
2024-04-15 22:49:02 +03:00
// RetiredList returns messages retired from the queue (delivery could
// have succeeded or failed).
func (Admin) RetiredList(ctx context.Context, filter queue.RetiredFilter, sort queue.RetiredSort) []queue.MsgRetired {
	l, err := queue.RetiredList(ctx, filter, sort)
	xcheckf(ctx, err, "listing retired messages")
	return l
}
// HookQueueSize returns the number of webhooks still to be delivered.
func (Admin) HookQueueSize(ctx context.Context) int {
	n, err := queue.HookQueueSize(ctx)
	xcheckf(ctx, err, "get hook queue size")
	return n
}
// HookList lists webhooks still to be delivered.
func (Admin) HookList(ctx context.Context, filter queue.HookFilter, sort queue.HookSort) []queue.Hook {
	l, err := queue.HookList(ctx, filter, sort)
	xcheckf(ctx, err, "listing hook queue")
	return l
}
// HookNextAttemptSet sets a new time for next delivery attempt of matching
// hooks from the queue.
func (Admin) HookNextAttemptSet(ctx context.Context, filter queue.HookFilter, minutes int) (affected int) {
	n, err := queue.HookNextAttemptSet(ctx, filter, time.Now().Add(time.Duration(minutes)*time.Minute))
	xcheckf(ctx, err, "setting new next delivery attempt time for matching webhooks in queue")
	return n
}
// HookNextAttemptAdd adds a duration to the time of next delivery attempt of
// matching hooks from the queue.
func (Admin) HookNextAttemptAdd(ctx context.Context, filter queue.HookFilter, minutes int) (affected int) {
	n, err := queue.HookNextAttemptAdd(ctx, filter, time.Duration(minutes)*time.Minute)
	xcheckf(ctx, err, "adding duration to next delivery attempt for matching webhooks in queue")
	return n
}
// HookRetiredList lists retired webhooks.
func (Admin) HookRetiredList(ctx context.Context, filter queue.HookRetiredFilter, sort queue.HookRetiredSort) []queue.HookRetired {
	l, err := queue.HookRetiredList(ctx, filter, sort)
	xcheckf(ctx, err, "listing retired hooks")
	return l
}
// HookCancel prevents further delivery attempts of matching webhooks.
func (Admin) HookCancel(ctx context.Context, filter queue.HookFilter) (affected int) {
	log := pkglog.WithContext(ctx)
	n, err := queue.HookCancel(ctx, log, filter)
	xcheckf(ctx, err, "cancel hooks in queue")
	return n
}
2023-02-06 17:17:46 +03:00
// LogLevels returns the current log levels.
func (Admin) LogLevels(ctx context.Context) map[string]string {
	m := map[string]string{}
	for pkg, level := range mox.Conf.LogLevels() {
2023-12-05 15:35:58 +03:00
		s, ok := mlog.LevelStrings[level]
		if !ok {
			s = level.String()
		}
		m[pkg] = s
2023-02-06 17:17:46 +03:00
	}
	return m
}
// LogLevelSet sets a log level for a package.
func (Admin) LogLevelSet(ctx context.Context, pkg string, levelStr string) {
	level, ok := mlog.Levels[levelStr]
	if !ok {
2023-08-09 09:02:58 +03:00
		xcheckuserf(ctx, errors.New("unknown"), "lookup level")
2023-02-06 17:17:46 +03:00
	}
2023-12-05 15:35:58 +03:00
	mox.Conf.LogLevelSet(pkglog.WithContext(ctx), pkg, level)
2023-02-06 17:17:46 +03:00
}
// LogLevelRemove removes a log level for a package, which cannot be the empty string.
func (Admin) LogLevelRemove(ctx context.Context, pkg string) {
2023-12-05 15:35:58 +03:00
	mox.Conf.LogLevelRemove(pkglog.WithContext(ctx), pkg)
2023-02-06 17:17:46 +03:00
}
2023-02-27 17:03:37 +03:00
// CheckUpdatesEnabled returns whether checking for updates is enabled.
func (Admin) CheckUpdatesEnabled(ctx context.Context) bool {
	return mox.Conf.Static.CheckUpdates
}
improve webserver, add domain redirects (aliases), add tests and admin page ui to manage the config
- make builtin http handlers serve on specific domains, such as for mta-sts, so
e.g. /.well-known/mta-sts.txt isn't served on all domains.
- add logging of a few more fields in access logging.
- small tweaks/bug fixes in webserver request handling.
- add config option for redirecting entire domains to another (common enough).
- split httpserver metric into two: one for duration until writing header (i.e.
performance of server), another for duration until full response is sent to
client (i.e. performance as perceived by users).
- add admin ui, a new page for managing the configs. after making changes
and hitting "save", the changes take effect immediately. the page itself
doesn't look very well-designed (many input fields, makes it look messy). i
have an idea to improve it (explained in admin.html as todo) by making the
layout look just like the config file. not urgent though.
i've already changed my websites/webapps over.
the idea of adding a webserver is to take away a (the) reason for folks to want
to complicate their mox setup by running another webserver on the same machine.
i think the current webserver implementation can already serve most common use
cases. with a few more tweaks (feedback needed!) we should be able to get to 95%
of the use cases. the reverse proxy can take care of the remaining 5%.
nevertheless, a next step is still to change the quickstart to make it easier
for folks to run with an existing webserver, with existing tls certs/keys.
that's how this relates to issue #5.
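what a whole-domain redirect amounts to can be sketched with just the standard library (an illustration, not mox's webserver code; domains are made up): keep the path and query, swap the host, answer with a permanent redirect:

package main

import "net/http"

func main() {
	redirects := map[string]string{"www.example.org": "example.org"}
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if target, ok := redirects[r.Host]; ok {
			u := *r.URL
			u.Scheme = "https"
			u.Host = target
			http.Redirect(w, r, u.String(), http.StatusPermanentRedirect)
			return
		}
		http.NotFound(w, r)
	})
	http.ListenAndServe(":8080", handler)
}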
2023-03-02 20:15:54 +03:00
// WebserverConfig is the combination of WebDomainRedirects and WebHandlers
// from the domains.conf configuration file.
type WebserverConfig struct {
	WebDNSDomainRedirects [][2]dns.Domain // From server to frontend.
	WebDomainRedirects    [][2]string     // From frontend to server, it's not convenient to create dns.Domain in the frontend.
	WebHandlers           []config.WebHandler
}
// WebserverConfig returns the current webserver config.
func (Admin) WebserverConfig(ctx context.Context) (conf WebserverConfig) {
	conf = webserverConfig()
	conf.WebDomainRedirects = nil
	return conf
}
func webserverConfig() WebserverConfig {
	r, l := mox.Conf.WebServer()
	x := make([][2]dns.Domain, 0, len(r))
	xs := make([][2]string, 0, len(r))
	for k, v := range r {
		x = append(x, [2]dns.Domain{k, v})
		xs = append(xs, [2]string{k.Name(), v.Name()})
	}
	sort.Slice(x, func(i, j int) bool {
		return x[i][0].ASCII < x[j][0].ASCII
	})
	sort.Slice(xs, func(i, j int) bool {
		return xs[i][0] < xs[j][0]
	})
	return WebserverConfig{x, xs, l}
}
// WebserverConfigSave saves a new webserver config. If oldConf is not equal to
// the current config, an error is returned.
func (Admin) WebserverConfigSave(ctx context.Context, oldConf, newConf WebserverConfig) (savedConf WebserverConfig) {
	current := webserverConfig()
	webhandlersEqual := func() bool {
		if len(current.WebHandlers) != len(oldConf.WebHandlers) {
			return false
		}
		for i, wh := range current.WebHandlers {
			if !wh.Equal(oldConf.WebHandlers[i]) {
				return false
			}
		}
		return true
	}
	if !reflect.DeepEqual(oldConf.WebDNSDomainRedirects, current.WebDNSDomainRedirects) || !webhandlersEqual() {
2023-08-09 09:02:58 +03:00
		xcheckuserf(ctx, errors.New("config has changed"), "comparing old/current config")
2023-03-02 20:15:54 +03:00
}
// Convert to map, check that there are no duplicates here. The canonicalized
// dns.Domain are checked again for uniqueness when parsing the config before
// storing.
domainRedirects := map [ string ] string { }
for _ , x := range newConf . WebDomainRedirects {
if _ , ok := domainRedirects [ x [ 0 ] ] ; ok {
2023-08-09 09:02:58 +03:00
xcheckuserf ( ctx , errors . New ( "already present" ) , "checking redirect %s" , x [ 0 ] )
improve webserver, add domain redirects (aliases), add tests and admin page ui to manage the config
- make builtin http handlers serve on specific domains, such as for mta-sts, so
e.g. /.well-known/mta-sts.txt isn't served on all domains.
- add logging of a few more fields in access logging.
- small tweaks/bug fixes in webserver request handling.
- add config option for redirecting entire domains to another (common enough).
- split httpserver metric into two: one for duration until writing header (i.e.
performance of server), another for duration until full response is sent to
client (i.e. performance as perceived by users).
- add admin ui, a new page for managing the configs. after making changes
and hitting "save", the changes take effect immediately. the page itself
doesn't look very well-designed (many input fields, makes it look messy). i
have an idea to improve it (explained in admin.html as todo) by making the
layout look just like the config file. not urgent though.
i've already changed my websites/webapps over.
the idea of adding a webserver is to take away a (the) reason for folks to want
to complicate their mox setup by running an other webserver on the same machine.
i think the current webserver implementation can already serve most common use
cases. with a few more tweaks (feedback needed!) we should be able to get to 95%
of the use cases. the reverse proxy can take care of the remaining 5%.
nevertheless, a next step is still to change the quickstart to make it easier
for folks to run with an existing webserver, with existing tls certs/keys.
that's how this relates to issue #5.
2023-03-02 20:15:54 +03:00
}
domainRedirects [ x [ 0 ] ] = x [ 1 ]
}
err := mox . WebserverConfigSet ( ctx , domainRedirects , newConf . WebHandlers )
xcheckf ( ctx , err , "saving webserver config" )
savedConf = webserverConfig ( )
savedConf . WebDomainRedirects = nil
return savedConf
}
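as a concrete illustration of the save flow, here is a minimal sketch (not
part of mox; the helper and the redirect pair are hypothetical) of how a
caller is expected to use the pair of API calls: fetch the current config,
rebuild the redirect list as string pairs, and hand both old and new config to
WebserverConfigSave, which refuses to save if the config changed in the
meantime:

func exampleAddRedirect(ctx context.Context, admin Admin) WebserverConfig {
	oldConf := admin.WebserverConfig(ctx)
	newConf := oldConf

	// Redirects are submitted as string pairs, so rebuild the full list from
	// the current dns.Domain pairs before adding a new (hypothetical) one.
	for _, d := range oldConf.WebDNSDomainRedirects {
		newConf.WebDomainRedirects = append(newConf.WebDomainRedirects, [2]string{d[0].Name(), d[1].Name()})
	}
	newConf.WebDomainRedirects = append(newConf.WebDomainRedirects, [2]string{"example.org", "www.example.org"})

	// Fails with a user error if someone else changed the config concurrently.
	return admin.WebserverConfigSave(ctx, oldConf, newConf)
}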
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
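to make the evaluation order concrete, here is a minimal sketch in Go (assumed
types and field names for illustration, not mox's actual config structs) of
first-match route selection over account, domain and global routes:

type route struct {
	SenderDomain    string // empty means any sender domain
	RecipientDomain string // empty means any recipient domain
	MinimumAttempt  int    // only match from this delivery attempt onwards
	Transport       string // empty means direct delivery
}

func selectTransport(accountRoutes, domainRoutes, globalRoutes []route, senderDom, rcptDom string, attempt int) string {
	// Account routes are evaluated first, then domain routes, then global.
	for _, routes := range [][]route{accountRoutes, domainRoutes, globalRoutes} {
		for _, r := range routes {
			if (r.SenderDomain == "" || r.SenderDomain == senderDom) &&
				(r.RecipientDomain == "" || r.RecipientDomain == rcptDom) &&
				attempt >= r.MinimumAttempt {
				return r.Transport // first match wins
			}
		}
	}
	return "" // no route matched: direct delivery
}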
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
2023-06-16 19:38:28 +03:00
// Transports returns the configured transports, for sending email.
func (Admin) Transports(ctx context.Context) map[string]config.Transport {
	return mox.Conf.Static.Transports
}
2023-11-01 19:55:40 +03:00
// DMARCEvaluationStats returns a map of all domains with evaluations to a count of
// the evaluations and whether those evaluations will cause a report to be sent.
func (Admin) DMARCEvaluationStats(ctx context.Context) map[string]dmarcdb.EvaluationStat {
	stats, err := dmarcdb.EvaluationStats(ctx)
	xcheckf(ctx, err, "get evaluation stats")
	return stats
}
// DMARCEvaluationsDomain returns all evaluations for aggregate reports for the
// domain, sorted from oldest to most recent.
func (Admin) DMARCEvaluationsDomain(ctx context.Context, domain string) (dns.Domain, []dmarcdb.Evaluation) {
	dom, err := dns.ParseDomain(domain)
	xcheckf(ctx, err, "parsing domain")
	evals, err := dmarcdb.EvaluationsDomain(ctx, dom)
	xcheckf(ctx, err, "get evaluations for domain")
	return dom, evals
}
// DMARCRemoveEvaluations removes evaluations for a domain.
func (Admin) DMARCRemoveEvaluations(ctx context.Context, domain string) {
	dom, err := dns.ParseDomain(domain)
	xcheckf(ctx, err, "parsing domain")
	err = dmarcdb.RemoveEvaluationsDomain(ctx, dom)
	xcheckf(ctx, err, "removing evaluations for domain")
}
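a minimal usage sketch (hypothetical helper, not part of mox) tying these
calls together: list the per-domain evaluation stats, then fetch the full
evaluations for each domain:

func exampleDMARCEvaluations(ctx context.Context) {
	stats := Admin{}.DMARCEvaluationStats(ctx)
	for domain := range stats {
		dom, evals := Admin{}.DMARCEvaluationsDomain(ctx, domain)
		fmt.Printf("%s: %d evaluations\n", dom.Name(), len(evals))
	}
}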
implement outgoing tls reports
we were already accepting, processing and displaying incoming tls reports. now
we start tracking TLS connection and security-policy-related errors for
outgoing message deliveries as well. we send reports once a day, to the
reporting addresses specified in TLSRPT records (rua) of a policy domain. these
reports are about MTA-STS policies and/or DANE policies, and about
STARTTLS-related failures.
sending reports is enabled by default, but can be disabled through setting
NoOutgoingTLSReports in mox.conf.
only at the end of the implementation process came the realization that the
TLSRPT policy domain for DANE (MX) hosts is separate from the TLSRPT policy
for the recipient domain, and that MTA-STS and DANE TLS/policy results are
typically delivered in separate reports. so MX hosts need their own TLSRPT
policies.
config for the per-host TLSRPT policy should be added to mox.conf for existing
installs, in field HostTLSRPT. it is automatically configured by quickstart for
new installs. with a HostTLSRPT config, the "dns records" and "dns check" admin
pages now suggest the per-host TLSRPT record. by creating that record, you're
requesting TLS reports about your MX host.
gathering all the TLS/policy results is somewhat tricky. the tentacles go
throughout the code. the positive result is that the TLS/policy-related code
had to be cleaned up a bit. for example, the smtpclient TLS modes now reflect
reality better, with independent settings about whether PKIX and/or DANE
verification has to be done, and/or whether verification errors have to be
ignored (e.g. for tls-required: no header). also, cached mtasts policies of
mode "none" are now cleaned up once the MTA-STS DNS record goes away.
2023-11-13 15:48:52 +03:00
// DMARCSuppressAdd adds a reporting address to the suppress list. Outgoing
// reports will be suppressed for a period.
func (Admin) DMARCSuppressAdd(ctx context.Context, reportingAddress string, until time.Time, comment string) {
	addr, err := smtp.ParseAddress(reportingAddress)
	xcheckuserf(ctx, err, "parsing reporting address")

	ba := dmarcdb.SuppressAddress{ReportingAddress: addr.String(), Until: until, Comment: comment}
	err = dmarcdb.SuppressAdd(ctx, &ba)
	xcheckf(ctx, err, "adding address to suppresslist")
}
// DMARCSuppressList returns all reporting addresses on the suppress list.
func (Admin) DMARCSuppressList(ctx context.Context) []dmarcdb.SuppressAddress {
	l, err := dmarcdb.SuppressList(ctx)
	xcheckf(ctx, err, "listing reporting addresses in suppresslist")
	return l
}
// DMARCSuppressRemove removes a reporting address record from the suppress list.
func (Admin) DMARCSuppressRemove(ctx context.Context, id int64) {
	err := dmarcdb.SuppressRemove(ctx, id)
	xcheckf(ctx, err, "removing reporting address from suppresslist")
}
// DMARCSuppressExtend updates the until field of a suppressed reporting address record.
func (Admin) DMARCSuppressExtend(ctx context.Context, id int64, until time.Time) {
	err := dmarcdb.SuppressUpdate(ctx, id, until)
	xcheckf(ctx, err, "updating reporting address in suppresslist")
}
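a minimal sketch of the typical flow (hypothetical address and duration):
suppress outgoing DMARC reports to a bouncing reporting address for a month:

func exampleSuppress(ctx context.Context) {
	until := time.Now().AddDate(0, 1, 0) // one month from now
	Admin{}.DMARCSuppressAdd(ctx, "dmarc-reports@example.org", until, "mailbox bounces")
}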
// TLSRPTResults returns all TLSRPT results in the database.
func (Admin) TLSRPTResults(ctx context.Context) []tlsrptdb.TLSResult {
	results, err := tlsrptdb.Results(ctx)
	xcheckf(ctx, err, "get results")
	return results
}
// TLSRPTResultsDomain returns the TLS results for a domain, either a policy
// domain or a recipient domain depending on isRcptDom.
func (Admin) TLSRPTResultsDomain(ctx context.Context, isRcptDom bool, policyDomain string) (dns.Domain, []tlsrptdb.TLSResult) {
	dom, err := dns.ParseDomain(policyDomain)
	xcheckf(ctx, err, "parsing domain")

	if isRcptDom {
		results, err := tlsrptdb.ResultsRecipientDomain(ctx, dom)
		xcheckf(ctx, err, "get result for recipient domain")
		return dom, results
	}
	results, err := tlsrptdb.ResultsPolicyDomain(ctx, dom)
	xcheckf(ctx, err, "get result for policy domain")
	return dom, results
}
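a short usage sketch (hypothetical domains): the isRcptDom flag selects
whether results are looked up by recipient domain or by policy domain:

func exampleTLSResults(ctx context.Context) {
	// By policy domain, e.g. an MX host with its own TLSRPT policy:
	dom, results := Admin{}.TLSRPTResultsDomain(ctx, false, "mx.example.com")
	// By recipient domain:
	dom, results = Admin{}.TLSRPTResultsDomain(ctx, true, "example.com")
	_, _ = dom, results
}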
// LookupTLSRPTRecord looks up a TLSRPT record and returns the parsed form, the
// original TXT form from DNS, and any lookup/parse error as a string.
func (Admin) LookupTLSRPTRecord(ctx context.Context, domain string) (record *TLSRPTRecord, txt string, errstr string) {
	log := pkglog.WithContext(ctx)
	dom, err := dns.ParseDomain(domain)
	xcheckf(ctx, err, "parsing domain")

	resolver := dns.StrictResolver{Pkg: "webadmin", Log: log.Logger}
	r, txt, err := tlsrpt.Lookup(ctx, log.Logger, resolver, dom)
	if err != nil && (errors.Is(err, tlsrpt.ErrNoRecord) || errors.Is(err, tlsrpt.ErrMultipleRecords) || errors.Is(err, tlsrpt.ErrRecordSyntax) || errors.Is(err, tlsrpt.ErrDNS)) {
		errstr = err.Error()
		err = nil
	}
	xcheckf(ctx, err, "fetching tlsrpt record")

	if r != nil {
		record = &TLSRPTRecord{Record: *r}
	}
	return record, txt, errstr
}
// TLSRPTRemoveResults removes the TLS results for a domain for the given day. If
// day is empty, all results are removed.
func (Admin) TLSRPTRemoveResults(ctx context.Context, isRcptDom bool, domain string, day string) {
	dom, err := dns.ParseDomain(domain)
	xcheckf(ctx, err, "parsing domain")

	if isRcptDom {
		err = tlsrptdb.RemoveResultsRecipientDomain(ctx, dom, day)
		xcheckf(ctx, err, "removing tls results")
	} else {
		err = tlsrptdb.RemoveResultsPolicyDomain(ctx, dom, day)
		xcheckf(ctx, err, "removing tls results")
	}
}
2023-11-13 15:48:52 +03:00
// TLSRPTSuppressAdd adds a reporting address to the suppress list. Outgoing
// reports will be suppressed for a period.
func (Admin) TLSRPTSuppressAdd(ctx context.Context, reportingAddress string, until time.Time, comment string) {
	addr, err := smtp.ParseAddress(reportingAddress)
	xcheckuserf(ctx, err, "parsing reporting address")

	ba := tlsrptdb.TLSRPTSuppressAddress{ReportingAddress: addr.String(), Until: until, Comment: comment}
	err = tlsrptdb.SuppressAdd(ctx, &ba)
	xcheckf(ctx, err, "adding address to suppresslist")
}

// TLSRPTSuppressList returns all reporting addresses on the suppress list.
func (Admin) TLSRPTSuppressList(ctx context.Context) []tlsrptdb.TLSRPTSuppressAddress {
	l, err := tlsrptdb.SuppressList(ctx)
	xcheckf(ctx, err, "listing reporting addresses in suppresslist")
	return l
}

// TLSRPTSuppressRemove removes a reporting address record from the suppress list.
func (Admin) TLSRPTSuppressRemove(ctx context.Context, id int64) {
	err := tlsrptdb.SuppressRemove(ctx, id)
	xcheckf(ctx, err, "removing reporting address from suppresslist")
}

// TLSRPTSuppressExtend updates the until field of a suppressed reporting address record.
func (Admin) TLSRPTSuppressExtend(ctx context.Context, id int64, until time.Time) {
	err := tlsrptdb.SuppressUpdate(ctx, id, until)
	xcheckf(ctx, err, "updating reporting address in suppresslist")
}
2024-03-05 12:50:56 +03:00
// LookupCid turns an ID from a Received header into a cid as used in logging.
func (Admin) LookupCid(ctx context.Context, recvID string) (cid string) {
	v, err := mox.ReceivedToCid(recvID)
	xcheckf(ctx, err, "received id to cid")
	return fmt.Sprintf("%x", v)
}
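a tiny usage sketch (the id is hypothetical): take the opaque id from a
Received header that mox added to a delivered message, and turn it into the
hex cid to search the logs for:

func exampleLookupCid(ctx context.Context) {
	cid := Admin{}.LookupCid(ctx, "AbCdEfGh")
	fmt.Println("looking for log lines with cid", cid)
}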