package mox

import (
	"bytes"
	"context"
implement dnssec-awareness throughout the code, and dane for incoming/outgoing mail delivery

the vendored dns resolver code is a copy of the go stdlib dns resolver, with awareness of the "authentic data" (i.e. dnssec-secure) bit added, as well as support for extended dns errors and for looking up tlsa records (for dane). ideally it would be upstreamed, but the chances seem slim.

dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their dnssec status is added to the Received message headers for incoming email.

but the main reason to add dnssec was to implement dane. with dane, tls certificates can be verified through certificates/public keys published in dns (in tlsa records). this only makes sense (is trustworthy) if those dns records can be verified to be authentic.

mox now applies dane when delivering messages over smtp. mox already implemented mta-sts for webpki/pkix verification of certificates against the (large) pool of CAs, and still enforces those policies when present. but it now also checks for dane records, and verifies them if present. if dane and mta-sts are both absent, regular opportunistic tls with starttls is still used, as is the fallback to plaintext.

mox also makes it easy to set up dane for incoming deliveries, so other servers can deliver with dane tls certificate verification. the quickstart now generates private keys that are used when requesting certificates with acme. the private keys are pre-generated because they must be static and known during setup: their public keys must be published in tlsa records in dns. autocert would generate private keys on its own, so it had to be forked to add an option to provide the private key when requesting a new certificate. hopefully upstream will accept the change and we can drop the fork.

with this change, setting up a new mox instance with the quickstart results in a 100% score in the checks at internet.nl, provided the domain is dnssec-signed and the network doesn't have any issues.
	"crypto"
	"crypto/ed25519"
	cryptorand "crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log/slog"
	"net"
	"net/url"
	"os"
	"path/filepath"
	"slices"
	"sort"
	"strings"
	"time"

	"golang.org/x/exp/maps"
	"github.com/mjl-/adns"

	"github.com/mjl-/mox/config"
	"github.com/mjl-/mox/dkim"
	"github.com/mjl-/mox/dmarc"
	"github.com/mjl-/mox/dns"
	"github.com/mjl-/mox/junk"
	"github.com/mjl-/mox/mlog"
	"github.com/mjl-/mox/mtasts"
	"github.com/mjl-/mox/smtp"
	"github.com/mjl-/mox/tlsrpt"
)

var ErrRequest = errors.New("bad request")
// TXTStrings returns a TXT record value as one or more quoted strings, each at
// most 100 characters. In case of multiple strings, a multi-line record is
// returned.
func TXTStrings(s string) string {
	if len(s) <= 100 {
		return `"` + s + `"`
	}

	r := "(\n"
	for len(s) > 0 {
		n := len(s)
		if n > 100 {
			n = 100
		}
		r += "\t\t\"" + s[:n] + "\"\n"
		s = s[n:]
	}
	r += "\t)"
	return r
}
// MakeDKIMEd25519Key returns a PEM buffer containing an ed25519 key for use
// with DKIM.
// selector and domain can be empty. If not, they are used in the note.
func MakeDKIMEd25519Key(selector, domain dns.Domain) ([]byte, error) {
	_, privKey, err := ed25519.GenerateKey(cryptorand.Reader)
	if err != nil {
		return nil, fmt.Errorf("generating key: %w", err)
	}

	pkcs8, err := x509.MarshalPKCS8PrivateKey(privKey)
	if err != nil {
		return nil, fmt.Errorf("marshal key: %w", err)
	}

	block := &pem.Block{
		Type: "PRIVATE KEY",
		Headers: map[string]string{
			"Note": dkimKeyNote("ed25519", selector, domain),
		},
		Bytes: pkcs8,
	}
	b := &bytes.Buffer{}
	if err := pem.Encode(b, block); err != nil {
		return nil, fmt.Errorf("encoding pem: %w", err)
	}
	return b.Bytes(), nil
}

func dkimKeyNote(kind string, selector, domain dns.Domain) string {
	s := kind + " dkim private key"
	var zero dns.Domain
	if selector != zero && domain != zero {
		s += fmt.Sprintf(" for %s._domainkey.%s", selector.ASCII, domain.ASCII)
	}
	s += fmt.Sprintf(", generated by mox on %s", time.Now().Format(time.RFC3339))
	return s
}
// MakeDKIMRSAKey returns a PEM buffer containing an rsa key for use with
// DKIM.
// selector and domain can be empty. If not, they are used in the note.
func MakeDKIMRSAKey(selector, domain dns.Domain) ([]byte, error) {
	// 2048 bits seems reasonable in 2022, 1024 is on the low side, larger
	// keys may not fit in a UDP DNS response.
	privKey, err := rsa.GenerateKey(cryptorand.Reader, 2048)
	if err != nil {
		return nil, fmt.Errorf("generating key: %w", err)
	}

	pkcs8, err := x509.MarshalPKCS8PrivateKey(privKey)
	if err != nil {
		return nil, fmt.Errorf("marshal key: %w", err)
	}

	block := &pem.Block{
		Type: "PRIVATE KEY",
		Headers: map[string]string{
			"Note": dkimKeyNote("rsa-2048", selector, domain),
		},
		Bytes: pkcs8,
	}
	b := &bytes.Buffer{}
	if err := pem.Encode(b, block); err != nil {
		return nil, fmt.Errorf("encoding pem: %w", err)
	}
	return b.Bytes(), nil
}
// MakeAccountConfig returns a new account configuration for an email address.
func MakeAccountConfig(addr smtp.Address) config.Account {
	account := config.Account{
		Domain: addr.Domain.Name(),
		Destinations: map[string]config.Destination{
			addr.String(): {},
		},
		RejectsMailbox: "Rejects",
		JunkFilter: &config.JunkFilter{
			Threshold: 0.95,
			Params: junk.Params{
				Onegrams:    true,
				MaxPower:    .01,
				TopWords:    10,
				IgnoreWords: .1,
				RareWords:   2,
			},
		},
	}
improve training of the junk filter

before, we used heuristics to decide when to train/untrain a message as junk or nonjunk: the message had to be seen, and be in certain mailboxes. then if a message was marked as junk, it was junk, and otherwise it was nonjunk. this wasn't good enough: you may want to keep some messages around as neither junk nor nonjunk, and that wasn't possible.

ideally, we would just look at the imap $Junk and $NotJunk flags. the problem is that mail clients don't set these flags, or don't make it easy. thunderbird can set the flags based on its own bayesian filter. it has a shortcut for marking a message as junk and moving it to the junk folder (good), but the notjunk counterpart only marks a message as notjunk without showing in the UI that it was marked. there is also no "move and mark as notjunk" mechanism; e.g. "archive" does not mark a message as notjunk. ios mail and mutt don't appear to have any way to see or change the $Junk and $NotJunk flags.

what email clients do have is the ability to move messages to other mailboxes/folders. so mox now has a mechanism that lets you configure mailboxes that automatically set $Junk or $NotJunk (or clear both) when a message is moved/copied/delivered to that folder. e.g. a mailbox called junk or spam or rejects marks its messages as junk. inbox, postmaster, dmarc, tlsrpt, and neutral* mark their messages as neither junk nor notjunk. other folders, e.g. list/* and archive, mark their messages as notjunk. this functionality is optional, but enabled by the quickstart and for new accounts.

also, mox now keeps track of the previous training of a message and will only untrain/train when needed. before, there have probably been duplicate or missing (un)trainings.

this also includes a new subcommand "retrain" to recreate the junk filter for an account. you should run it after updating to this version, and you should probably also modify your account config to include the AutomaticJunkFlags.
	account.AutomaticJunkFlags.Enabled = true
	account.AutomaticJunkFlags.JunkMailboxRegexp = "^(junk|spam)"
	account.AutomaticJunkFlags.NeutralMailboxRegexp = "^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects)"

	account.SubjectPass.Period = 12 * time.Hour
	return account
}
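the effect of the two regexps can be sketched as a small classifier. note the assumption here: the lowercase patterns suggest matching against a lowercased mailbox name, and the "everything else is notjunk" fallback is how the commit message describes the behavior — both are illustrative, not mox's actual matching code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	junkRe    = regexp.MustCompile("^(junk|spam)")
	neutralRe = regexp.MustCompile("^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects)")
)

// classify returns which automatic junk flag a mailbox name would imply,
// assuming names are lowercased before matching.
func classify(mailbox string) string {
	name := strings.ToLower(mailbox)
	switch {
	case junkRe.MatchString(name):
		return "junk"
	case neutralRe.MatchString(name):
		return "neutral"
	default:
		return "notjunk"
	}
}

func main() {
	for _, m := range []string{"Spam", "Inbox", "Archive/2023"} {
		fmt.Println(m, "->", classify(m))
	}
}
```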
func writeFile(log mlog.Log, path string, data []byte) error {
	os.MkdirAll(filepath.Dir(path), 0770)
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0660)
	if err != nil {
		return fmt.Errorf("creating file %s: %s", path, err)
	}
	defer func() {
		if f != nil {
			err := f.Close()
			log.Check(err, "closing file after error")
			err = os.Remove(path)
			log.Check(err, "removing file after error", slog.String("path", path))
		}
	}()
	if _, err := f.Write(data); err != nil {
		return fmt.Errorf("writing file %s: %s", path, err)
	}
	if err := f.Close(); err != nil {
		return fmt.Errorf("close file: %v", err)
	}
	f = nil
	return nil
}
change mox to start as root, bind to network sockets, then drop to a regular unprivileged mox user

this makes it easier to run on bsd's, where you cannot (easily?) let non-root users bind to ports <1024. starting as root also paves the way for future improvements with privilege separation.

unfortunately, this requires changes to how you start mox, though mox will help by automatically fixing up dir/file permissions/ownership. if you start mox from the systemd unit file, you should update it so it starts as root and adds a few additional capabilities:

    # first update the mox binary, then, as root:
    ./mox config printservice >mox.service
    systemctl daemon-reload
    systemctl restart mox
    journalctl -f -u mox &
    # you should see mox start up, with messages about fixing permissions on dirs/files.

if you used the recommended config/ and data/ directories, in a directory just for mox, and with the mox user called "mox", this should be enough. if you don't want mox to modify dir/file permissions, set "NoFixPermissions: true" in mox.conf. if you named the mox user something other than mox, e.g. "_mox", add "User: _mox" to mox.conf. if you created a shared service user as originally suggested, you may want to get rid of it, as it is no longer useful and may get in the way: e.g. if you had /home/service/mox with a "service" user, that service user can no longer access any files; only mox and root can.

this also adds scripts for building mox docker images for alpine-supported platforms.

the "restart" subcommand has been removed; it wasn't all that useful and got in the way.

and another change: when adding a domain while mta-sts isn't enabled, don't add the per-domain mta-sts config, as it would cause adding the domain to fail.

based on a report from mteege setting up mox on openbsd, and on issue #3. thanks for the feedback!

// MakeDomainConfig makes a new config for a domain, creating DKIM keys, using
// accountName for DMARC and TLS reports.
func MakeDomainConfig(ctx context.Context, domain, hostname dns.Domain, accountName string, withMTASTS bool) (config.Domain, []string, error) {
	log := pkglog.WithContext(ctx)

	now := time.Now()
	year := now.Format("2006")
	timestamp := now.Format("20060102T150405")

	var paths []string
	defer func() {
		for _, p := range paths {
			err := os.Remove(p)
			log.Check(err, "removing path for domain config", slog.String("path", p))
		}
	}()

	confDKIM := config.DKIM{
		Selectors: map[string]config.Selector{},
	}

	addSelector := func(kind, name string, privKey []byte) error {
		record := fmt.Sprintf("%s._domainkey.%s", name, domain.ASCII)
		keyPath := filepath.Join("dkim", fmt.Sprintf("%s.%s.%s.privatekey.pkcs8.pem", record, timestamp, kind))
		p := configDirPath(ConfigDynamicPath, keyPath)
		if err := writeFile(log, p, privKey); err != nil {
			return err
		}
		paths = append(paths, p)
		confDKIM.Selectors[name] = config.Selector{
			// The example in the RFC has 5 days between signing and expiration. ../rfc/6376:1393
			// Expiration is not intended as antireplay defense, but it may help. ../rfc/6376:1340
			// Messages in the wild have been observed with 2 hour and 1 year expirations.
			Expiration:     "72h",
			PrivateKeyFile: keyPath,
		}
		return nil
	}

	addEd25519 := func(name string) error {
		key, err := MakeDKIMEd25519Key(dns.Domain{ASCII: name}, domain)
		if err != nil {
			return fmt.Errorf("making dkim ed25519 private key: %s", err)
		}
		return addSelector("ed25519", name, key)
	}

	addRSA := func(name string) error {
		key, err := MakeDKIMRSAKey(dns.Domain{ASCII: name}, domain)
		if err != nil {
			return fmt.Errorf("making dkim rsa private key: %s", err)
		}
		return addSelector("rsa2048", name, key)
	}

	if err := addEd25519(year + "a"); err != nil {
		return config.Domain{}, nil, err
	}
	if err := addRSA(year + "b"); err != nil {
		return config.Domain{}, nil, err
	}
	if err := addEd25519(year + "c"); err != nil {
		return config.Domain{}, nil, err
	}
	if err := addRSA(year + "d"); err != nil {
		return config.Domain{}, nil, err
	}

	// We sign with the first two. In case they are misused, the switch to the other
	// keys is easy: just change the config. Operators should make the public key field
	// of the misused keys empty in the DNS records to disable those keys.
	confDKIM.Sign = []string{year + "a", year + "b"}
	confDomain := config.Domain{

assume a dns cname record, mail.<domain>, pointing to the hostname of the mail server, for clients to connect to

the autoconfig/autodiscover endpoints and the printed client settings (in the quickstart, and in the admin interface) now all point to the cname record (called the "client settings domain"). it is configurable per domain, and set to "mail.<domain>" by default. for existing mox installs, the domain can be added by editing the config file.

this makes it easier for a domain to migrate to another server in the future: client settings don't have to be updated, the cname can just be changed. before, the hostname of the mail server was configured in email clients, and migrating away would require changing settings in all clients.

if a client settings domain is configured, a TLS certificate for the name will be requested through ACME, or must be configured manually.

		ClientSettingsDomain:       "mail." + domain.Name(),
		LocalpartCatchallSeparator: "+",
		DKIM:                       confDKIM,
		DMARC: &config.DMARC{
			Account:   accountName,
			Localpart: "dmarc-reports",
			Mailbox:   "DMARC",
		},
		TLSRPT: &config.TLSRPT{
			Account:   accountName,
			Localpart: "tls-reports",
			Mailbox:   "TLSRPT",
		},
	}
	if withMTASTS {
		confDomain.MTASTS = &config.MTASTS{
			PolicyID: time.Now().UTC().Format("20060102T150405"),
			Mode:     mtasts.ModeEnforce,
			// We start out with 24 hours, and warn in the admin interface that users should
			// increase it to weeks once the setup works.
			MaxAge: 24 * time.Hour,
			MX:     []string{hostname.ASCII},
		}
	}

	rpaths := paths
	paths = nil

	return confDomain, rpaths, nil
}
// DKIMAdd adds a DKIM selector for a domain, generating a key and writing it to disk.
func DKIMAdd(ctx context.Context, domain, selector dns.Domain, algorithm, hash string, headerRelaxed, bodyRelaxed, seal bool, headers []string, lifetime time.Duration) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding dkim key", rerr,
				slog.Any("domain", domain),
				slog.Any("selector", selector))
		}
	}()

	switch hash {
	case "sha256", "sha1":
	default:
		return fmt.Errorf("%w: unknown hash algorithm %q", ErrRequest, hash)
	}

	var privKey []byte
	var err error
	var kind string
	switch algorithm {
	case "rsa":
		privKey, err = MakeDKIMRSAKey(selector, domain)
		kind = "rsa2048"
	case "ed25519":
		privKey, err = MakeDKIMEd25519Key(selector, domain)
		kind = "ed25519"
	default:
		err = fmt.Errorf("unknown algorithm")
	}
	if err != nil {
		return fmt.Errorf("%w: making dkim key: %v", ErrRequest, err)
	}

	// Only take the lock now, we don't want to hold it while generating a key.
	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	d, ok := c.Domains[domain.Name()]
	if !ok {
		return fmt.Errorf("%w: domain does not exist", ErrRequest)
	}
	if _, ok := d.DKIM.Selectors[selector.Name()]; ok {
		return fmt.Errorf("%w: selector already exists for domain", ErrRequest)
	}

	record := fmt.Sprintf("%s._domainkey.%s", selector.ASCII, domain.ASCII)
	timestamp := time.Now().Format("20060102T150405")
	keyPath := filepath.Join("dkim", fmt.Sprintf("%s.%s.%s.privatekey.pkcs8.pem", record, timestamp, kind))
	p := configDirPath(ConfigDynamicPath, keyPath)
	if err := writeFile(log, p, privKey); err != nil {
		return fmt.Errorf("writing key file: %v", err)
	}
	removePath := p
	defer func() {
		if removePath != "" {
			err := os.Remove(removePath)
			log.Check(err, "removing path for dkim key", slog.String("path", removePath))
		}
	}()

	nsel := config.Selector{
		Hash: hash,
		Canonicalization: config.Canonicalization{
			HeaderRelaxed: headerRelaxed,
			BodyRelaxed:   bodyRelaxed,
		},
		Headers:         headers,
		DontSealHeaders: !seal,
		Expiration:      lifetime.String(),
		PrivateKeyFile:  keyPath,
	}

	// All good, time to update the config.
	nd := d
	nd.DKIM.Selectors = map[string]config.Selector{}
	for name, osel := range d.DKIM.Selectors {
		nd.DKIM.Selectors[name] = osel
	}
	nd.DKIM.Selectors[selector.Name()] = nsel
	nc := c
	nc.Domains = map[string]config.Domain{}
	for name, dom := range c.Domains {
		nc.Domains[name] = dom
	}
	nc.Domains[domain.Name()] = nd

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("dkim key added", slog.Any("domain", domain), slog.Any("selector", selector))
	removePath = "" // Prevent cleanup of key file.
	return nil
}
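DKIMAdd updates the dynamic config copy-on-write: it shallow-copies each map it changes (selectors, domains) so concurrent readers of the old config are unaffected and a failed `writeDynamic` leaves no trace. the core of that pattern can be sketched generically (the helper is illustrative, not mox code):

```go
package main

import "fmt"

// updated returns a copy of m with key set to value, leaving m untouched —
// the copy-on-write map update used when composing a new config.
func updated[K comparable, V any](m map[K]V, key K, value V) map[K]V {
	nm := make(map[K]V, len(m)+1)
	for k, v := range m {
		nm[k] = v
	}
	nm[key] = value
	return nm
}

func main() {
	old := map[string]int{"2024a": 1}
	next := updated(old, "2024b", 2)
	// The original map is untouched; only the copy has the new entry.
	fmt.Println(len(old), len(next)) // prints 1 2
}
```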
// DKIMRemove removes the selector from the domain, moving the key file out of the way.
func DKIMRemove(ctx context.Context, domain, selector dns.Domain) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing dkim key", rerr,
				slog.Any("domain", domain),
				slog.Any("selector", selector))
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	d, ok := c.Domains[domain.Name()]
	if !ok {
		return fmt.Errorf("%w: domain does not exist", ErrRequest)
	}
	sel, ok := d.DKIM.Selectors[selector.Name()]
	if !ok {
		return fmt.Errorf("%w: selector does not exist for domain", ErrRequest)
	}

	nsels := map[string]config.Selector{}
	for name, sel := range d.DKIM.Selectors {
		if name != selector.Name() {
			nsels[name] = sel
		}
	}
	nsign := make([]string, 0, len(d.DKIM.Sign))
	for _, name := range d.DKIM.Sign {
		if name != selector.Name() {
			nsign = append(nsign, name)
		}
	}

	nd := d
	nd.DKIM = config.DKIM{Selectors: nsels, Sign: nsign}
	nc := c
	nc.Domains = map[string]config.Domain{}
	for name, dom := range c.Domains {
		nc.Domains[name] = dom
	}
	nc.Domains[domain.Name()] = nd

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	// Move away the DKIM private key to a subdirectory "old". But only if
	// it is not in use by other domains.
	usedKeyPaths := gatherUsedKeysPaths(nc)
	moveAwayKeys(log, map[string]config.Selector{selector.Name(): sel}, usedKeyPaths)

	log.Info("dkim key removed", slog.Any("domain", domain), slog.Any("selector", selector))
	return nil
}
// DomainAdd adds the domain to the domains config, rewriting domains.conf and
// marking it loaded.
//
// accountName is used for DMARC/TLS reports and potentially for the postmaster address.
// If the account does not exist, it is created with localpart. Localpart must be
// set only if the account does not yet exist.
func DomainAdd(ctx context.Context, domain dns.Domain, accountName string, localpart smtp.Localpart) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding domain", rerr,
				slog.Any("domain", domain),
				slog.String("account", accountName),
				slog.Any("localpart", localpart))
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	if _, ok := c.Domains[domain.Name()]; ok {
		return fmt.Errorf("%w: domain already present", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Domains = map[string]config.Domain{}
	for name, d := range c.Domains {
		nc.Domains[name] = d
	}
	// Only enable mta-sts for the domain if there is a listener with mta-sts.
	var withMTASTS bool
	for _, l := range Conf.Static.Listeners {
		if l.MTASTSHTTPS.Enabled {
			withMTASTS = true
			break
		}
	}

	confDomain, cleanupFiles, err := MakeDomainConfig(ctx, domain, Conf.Static.HostnameDomain, accountName, withMTASTS)
	if err != nil {
		return fmt.Errorf("preparing domain config: %v", err)
	}
	defer func() {
		for _, f := range cleanupFiles {
			err := os.Remove(f)
			log.Check(err, "cleaning up file after error", slog.String("path", f))
		}
	}()

	if _, ok := c.Accounts[accountName]; ok && localpart != "" {
		return fmt.Errorf("%w: account already exists (leave localpart empty when using an existing account)", ErrRequest)
	} else if !ok && localpart == "" {
		return fmt.Errorf("%w: account does not yet exist (specify a localpart)", ErrRequest)
	} else if accountName == "" {
		return fmt.Errorf("%w: account name is empty", ErrRequest)
	} else if !ok {
		nc.Accounts[accountName] = MakeAccountConfig(smtp.NewAddress(localpart, domain))
	} else if accountName != Conf.Static.Postmaster.Account {
		nacc := nc.Accounts[accountName]
		nd := map[string]config.Destination{}
		for k, v := range nacc.Destinations {
			nd[k] = v
		}
		pmaddr := smtp.NewAddress("postmaster", domain)
		nd[pmaddr.String()] = config.Destination{}
		nacc.Destinations = nd
		nc.Accounts[accountName] = nacc
	}

	nc.Domains[domain.Name()] = confDomain

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("domain added", slog.Any("domain", domain))
	cleanupFiles = nil // All good, don't cleanup.
	return nil
}
// DomainRemove removes domain from the config, rewriting domains.conf.
//
// No accounts are removed, also not when they still reference this domain.
func DomainRemove(ctx context.Context, domain dns.Domain) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing domain", rerr, slog.Any("domain", domain))
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	domConf, ok := c.Domains[domain.Name()]
	if !ok {
		return fmt.Errorf("%w: domain does not exist", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Domains = map[string]config.Domain{}
	s := domain.Name()
	for name, d := range c.Domains {
		if name != s {
			nc.Domains[name] = d
		}
	}

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	// Move away any DKIM private keys to a subdirectory "old". But only if
	// they are not in use by other domains.
	usedKeyPaths := gatherUsedKeysPaths(nc)
	moveAwayKeys(log, domConf.DKIM.Selectors, usedKeyPaths)

	log.Info("domain removed", slog.Any("domain", domain))
	return nil
}
func gatherUsedKeysPaths(nc config.Dynamic) map[string]bool {
	usedKeyPaths := map[string]bool{}
	for _, dc := range nc.Domains {
		for _, sel := range dc.DKIM.Selectors {
			usedKeyPaths[filepath.Clean(sel.PrivateKeyFile)] = true
		}
	}
	return usedKeyPaths
}

func moveAwayKeys(log mlog.Log, sels map[string]config.Selector, usedKeyPaths map[string]bool) {
	for _, sel := range sels {
		if sel.PrivateKeyFile == "" || usedKeyPaths[filepath.Clean(sel.PrivateKeyFile)] {
			continue
		}
		src := ConfigDirPath(sel.PrivateKeyFile)
		dst := ConfigDirPath(filepath.Join(filepath.Dir(sel.PrivateKeyFile), "old", filepath.Base(sel.PrivateKeyFile)))
		_, err := os.Stat(dst)
		if err == nil {
			err = fmt.Errorf("destination already exists")
		} else if os.IsNotExist(err) {
			os.MkdirAll(filepath.Dir(dst), 0770)
			err = os.Rename(src, dst)
		}
		if err != nil {
			log.Errorx("renaming dkim private key file for removed domain", err, slog.String("src", src), slog.String("dst", dst))
		}
	}
}
// DomainSave calls xmodify with a shallow copy of the domain config. xmodify
// can modify the config, but must clone all referencing data it changes.
// xmodify may employ panic-based error handling. After xmodify returns, the
// modified config is verified, saved and takes effect.
func DomainSave(ctx context.Context, domainName string, xmodify func(config *config.Domain) error) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("saving domain config", rerr)
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	nc := Conf.Dynamic                // Shallow copy.
	dom, ok := nc.Domains[domainName] // dom is a shallow copy.
	if !ok {
		return fmt.Errorf("%w: domain not present", ErrRequest)
	}

	if err := xmodify(&dom); err != nil {
		return err
	}
	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc.Domains = map[string]config.Domain{}
	for name, d := range Conf.Dynamic.Domains {
		nc.Domains[name] = d
	}
	nc.Domains[domainName] = dom
	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	log.Info("domain saved")
	return nil
}
// ConfigSave calls xmodify with a shallow copy of the dynamic config. xmodify
// can modify the config, but must clone all referencing data it changes.
// xmodify may employ panic-based error handling. After xmodify returns, the
// modified config is verified, saved and takes effect.
func ConfigSave(ctx context.Context, xmodify func(config *config.Dynamic)) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("saving config", rerr)
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	nc := Conf.Dynamic // Shallow copy.
	xmodify(&nc)

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	log.Info("config saved")
	return nil
}
// todo: find a way to automatically create the dns records as it would greatly simplify setting up email for a domain. we could also dynamically make changes, e.g. providing grace periods after disabling a dkim key, only automatically removing the dkim dns key after a few days. but this requires some kind of api and authentication to the dns server. there doesn't appear to be a single commonly used api for dns management. each of the numerous cloud providers has its own API and rather large SDKs to use them. we don't want to link all of them in.

// DomainRecords returns text lines describing DNS records required for configuring
// a domain.
//
// If certIssuerDomainName is set, CAA records to limit TLS certificate issuance to
// that caID will be suggested. If acmeAccountURI is also set, CAA records also
// restricting issuance to that account ID will be suggested.
func DomainRecords(domConf config.Domain, domain dns.Domain, hasDNSSEC bool, certIssuerDomainName, acmeAccountURI string) ([]string, error) {
	d := domain.ASCII
	h := Conf.Static.HostnameDomain.ASCII

	// The first line with ";" is used by ../testdata/integration/moxacmepebble.sh and
	// ../testdata/integration/moxmail2.sh for selecting DNS records.
	records := []string{
		"; Time To Live of 5 minutes, may be recognized if importing as a zone file.",
		"; Once your setup is working, you may want to increase the TTL.",
		"$TTL 300",
		"",
	}
	if public, ok := Conf.Static.Listeners["public"]; ok && public.TLS != nil && (len(public.TLS.HostPrivateRSA2048Keys) > 0 || len(public.TLS.HostPrivateECDSAP256Keys) > 0) {
		records = append(records,
			`; DANE: These records indicate that a remote mail server trying to deliver email`,
			`; with SMTP (TCP port 25) must verify the TLS certificate with DANE-EE (3), based`,
			`; on the certificate public key ("SPKI", 1) that is SHA2-256-hashed (1) to the`,
			`; hexadecimal hash. DANE-EE verification means only the certificate or public`,
			`; key is verified, not whether the certificate is signed by a (centralized)`,
			`; certificate authority (CA), is expired, or matches the host name.`,
			`;`,
			`; NOTE: Create the records below only once: They are for the machine, and apply`,
			`; to all hosted domains.`,
		)
		if !hasDNSSEC {
			records = append(records,
				";",
				"; WARNING: Domain does not appear to be DNSSEC-signed. To enable DANE, first",
				"; enable DNSSEC on your domain, then add the TLSA records. Records below have been",
				"; commented out.",
			)
		}

		addTLSA := func(privKey crypto.Signer) error {
			spkiBuf, err := x509.MarshalPKIXPublicKey(privKey.Public())
			if err != nil {
				return fmt.Errorf("marshal SubjectPublicKeyInfo for DANE record: %v", err)
			}
			sum := sha256.Sum256(spkiBuf)
			tlsaRecord := adns.TLSA{
				Usage:     adns.TLSAUsageDANEEE,
				Selector:  adns.TLSASelectorSPKI,
				MatchType: adns.TLSAMatchTypeSHA256,
				CertAssoc: sum[:],
			}
			var s string
			if hasDNSSEC {
				s = fmt.Sprintf("_25._tcp.%-*s TLSA %s", 20+len(d)-len("_25._tcp."), h+".", tlsaRecord.Record())
			} else {
				s = fmt.Sprintf(";; _25._tcp.%-*s TLSA %s", 20+len(d)-len(";; _25._tcp."), h+".", tlsaRecord.Record())
			}
			records = append(records, s)
			return nil
		}

		for _, privKey := range public.TLS.HostPrivateECDSAP256Keys {
			if err := addTLSA(privKey); err != nil {
				return nil, err
			}
		}
		for _, privKey := range public.TLS.HostPrivateRSA2048Keys {
			if err := addTLSA(privKey); err != nil {
				return nil, err
			}
		}
		records = append(records, "")
	}

	if d != h {
		records = append(records,
			"; For the machine, only needs to be created once, for the first domain added:",
			"; ",
			"; SPF-allow host for itself, resulting in relaxed DMARC pass for (postmaster)",
			"; messages (DSNs) sent from host:",
			fmt.Sprintf(`%-*s TXT "v=spf1 a -all"`, 20+len(d), h+"."), // ../rfc/7208:2263 ../rfc/7208:2287
			"",
		)
	}

	if d != h && Conf.Static.HostTLSRPT.ParsedLocalpart != "" {
		uri := url.URL{
			Scheme: "mailto",
			Opaque: smtp.NewAddress(Conf.Static.HostTLSRPT.ParsedLocalpart, Conf.Static.HostnameDomain).Pack(false),
		}
		tlsrptr := tlsrpt.Record{Version: "TLSRPTv1", RUAs: [][]tlsrpt.RUA{{tlsrpt.RUA(uri.String())}}}
		records = append(records,
			"; For the machine, only needs to be created once, for the first domain added:",
			"; ",
			"; Request reporting about success/failures of TLS connections to (MX) host, for DANE.",
			fmt.Sprintf(`_smtp._tls.%-*s TXT "%s"`, 20+len(d)-len("_smtp._tls."), h+".", tlsrptr.String()),
			"",
		)
	}
	records = append(records,
		"; Deliver email for the domain to this host.",
		fmt.Sprintf("%s. MX 10 %s.", d, h),
		"",
		"; Outgoing messages will be signed with the first two DKIM keys. The other two",
		"; are configured as backup; switching to them is just a config change.",
	)
	var selectors []string
	for name := range domConf.DKIM.Selectors {
		selectors = append(selectors, name)
	}
	sort.Slice(selectors, func(i, j int) bool {
		return selectors[i] < selectors[j]
	})
	for _, name := range selectors {
		sel := domConf.DKIM.Selectors[name]
		dkimr := dkim.Record{
			Version:   "DKIM1",
			Hashes:    []string{"sha256"},
			PublicKey: sel.Key.Public(),
		}
		if _, ok := sel.Key.(ed25519.PrivateKey); ok {
			dkimr.Key = "ed25519"
		} else if _, ok := sel.Key.(*rsa.PrivateKey); !ok {
			return nil, fmt.Errorf("unrecognized private key for DKIM selector %q: %T", name, sel.Key)
		}
		txt, err := dkimr.Record()
		if err != nil {
			return nil, fmt.Errorf("making DKIM DNS TXT record: %v", err)
		}
		if len(txt) > 100 {
			records = append(records,
				"; NOTE: The following is a single long record split over several lines for use",
				"; in zone files. When adding through a DNS operator web interface, combine the",
				"; strings into a single string, without ().",
			)
		}
		s := fmt.Sprintf("%s._domainkey.%s. TXT %s", name, d, TXTStrings(txt))
		records = append(records, s)
	}
	dmarcr := dmarc.DefaultRecord
	dmarcr.Policy = "reject"
	if domConf.DMARC != nil {
		uri := url.URL{
			Scheme: "mailto",
			Opaque: smtp.NewAddress(domConf.DMARC.ParsedLocalpart, domConf.DMARC.DNSDomain).Pack(false),
		}
		dmarcr.AggregateReportAddresses = []dmarc.URI{
			{Address: uri.String(), MaxSize: 10, Unit: "m"},
		}
	}
	records = append(records,
		"",
		"; Specify that the MX host is allowed to send for our domain and for itself (for DSNs).",
		"; ~all means softfail for anything else; it is used instead of -all to prevent older",
		"; mail servers from rejecting the message before they ever check for a DKIM/DMARC pass.",
		fmt.Sprintf(`%s. TXT "v=spf1 mx ~all"`, d),
		"",
		"; Emails that fail the DMARC check (without aligned DKIM and without aligned SPF)",
		"; should be rejected, and request reports. If you email through mailing lists that",
		"; strip DKIM-Signature headers and don't rewrite the From header, you may want to",
		"; set the policy to p=none.",
		fmt.Sprintf(`_dmarc.%s. TXT "%s"`, d, dmarcr.String()),
		"",
	)
	if sts := domConf.MTASTS; sts != nil {
		records = append(records,
"; Remote servers can use MTA-STS to verify our TLS certificate with the" ,
"; WebPKI pool of CA's (certificate authorities) when delivering over SMTP with" ,
"; STARTTLSTLS." ,
2023-10-13 09:16:46 +03:00
fmt . Sprintf ( ` mta-sts.%s. CNAME %s. ` , d , h ) ,
fmt . Sprintf ( ` _mta-sts.%s. TXT "v=STSv1; id=%s" ` , d , sts . PolicyID ) ,
2023-01-30 16:27:06 +03:00
"" ,
)
2023-02-27 17:04:32 +03:00
} else {
records = append ( records ,
"; Note: No MTA-STS to indicate TLS should be used. Either because disabled for the" ,
"; domain or because mox.conf does not have a listener with MTA-STS configured." ,
"" ,
)
2023-01-30 16:27:06 +03:00
}
2023-08-23 15:27:21 +03:00
	if domConf.TLSRPT != nil {
		uri := url.URL{
			Scheme: "mailto",
			Opaque: smtp.NewAddress(domConf.TLSRPT.ParsedLocalpart, domConf.TLSRPT.DNSDomain).Pack(false),
		}
		tlsrptr := tlsrpt.Record{Version: "TLSRPTv1", RUAs: [][]tlsrpt.RUA{{tlsrpt.RUA(uri.String())}}}
		records = append(records,
			"; Request reporting about TLS failures.",
			fmt.Sprintf(`_smtp._tls.%s. TXT "%s"`, d, tlsrptr.String()),
			"",
		)
	}
	if domConf.ClientSettingsDomain != "" && domConf.ClientSettingsDNSDomain != Conf.Static.HostnameDomain {
		records = append(records,
			"; Client settings will reference a subdomain of the hosted domain, making it",
			"; easier to migrate to a different server in the future by not requiring settings",
			"; in all clients to be updated.",
			fmt.Sprintf(`%-*s CNAME %s.`, 20+len(d), domConf.ClientSettingsDNSDomain.ASCII+".", h),
			"",
		)
	}
	records = append(records,
		"; Autoconfig is used by Thunderbird. Autodiscover is (in theory) used by Microsoft.",
		fmt.Sprintf(`autoconfig.%s. CNAME %s.`, d, h),
		fmt.Sprintf(`_autodiscover._tcp.%s. SRV 0 1 443 %s.`, d, h),
		"",
		// ../rfc/6186:133 ../rfc/8314:692
		"; For secure IMAP and submission autoconfig, point to mail host.",
		fmt.Sprintf(`_imaps._tcp.%s. SRV 0 1 993 %s.`, d, h),
		fmt.Sprintf(`_submissions._tcp.%s. SRV 0 1 465 %s.`, d, h),
		"",
		// ../rfc/6186:242
		"; The next records specify that POP3 and the non-TLS ports are not to be used.",
		"; These are optional and safe to leave out (e.g. if you have to click a lot in a",
		"; DNS admin web interface).",
		fmt.Sprintf(`_imap._tcp.%s. SRV 0 1 143 .`, d),
		fmt.Sprintf(`_submission._tcp.%s. SRV 0 1 587 .`, d),
		fmt.Sprintf(`_pop3._tcp.%s. SRV 0 1 110 .`, d),
		fmt.Sprintf(`_pop3s._tcp.%s. SRV 0 1 995 .`, d),
	)
	if certIssuerDomainName != "" {
		// ../rfc/8659:18 for CAA records.
		records = append(records,
			"",
			"; Optional:",
			"; You could mark Let's Encrypt as the only Certificate Authority allowed to",
			"; sign TLS certificates for your domain.",
			fmt.Sprintf(`%s. CAA 0 issue "%s"`, d, certIssuerDomainName),
		)
		if acmeAccountURI != "" {
			// ../rfc/8657:99 for accounturi.
			// ../rfc/8657:147 for validationmethods.
			records = append(records,
				";",
				"; Optionally limit certificates for this domain to the account ID and methods used by mox.",
				fmt.Sprintf(`;; %s. CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, d, certIssuerDomainName, acmeAccountURI),
				";",
				"; Or alternatively only limit for email-specific subdomains, so you can use",
				"; other accounts/methods for other subdomains.",
				fmt.Sprintf(`;; autoconfig.%s. CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, d, certIssuerDomainName, acmeAccountURI),
				fmt.Sprintf(`;; mta-sts.%s. CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, d, certIssuerDomainName, acmeAccountURI),
			)
			if domConf.ClientSettingsDomain != "" && domConf.ClientSettingsDNSDomain != Conf.Static.HostnameDomain {
				records = append(records,
					fmt.Sprintf(`;; %-*s CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, 20-3+len(d), domConf.ClientSettingsDNSDomain.ASCII, certIssuerDomainName, acmeAccountURI),
				)
			}
			if strings.HasSuffix(h, "."+d) {
				records = append(records,
					";",
					"; And the mail hostname.",
					fmt.Sprintf(`;; %-*s CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, 20-3+len(d), h+".", certIssuerDomainName, acmeAccountURI),
				)
			}
		} else {
			// The string "will be suggested" is used by
			// ../testdata/integration/moxacmepebble.sh and ../testdata/integration/moxmail2.sh
			// as the end of the DNS records.
			records = append(records,
				";",
				"; Note: After starting up, once an ACME account has been created, CAA records",
				"; that restrict issuance to the account will be suggested.",
			)
		}
	}

	return records, nil
}
// AccountAdd adds an account with an initial address and reloads the
// configuration.
//
// The new account does not have a password, so it cannot yet log in. Email can
// still be delivered.
//
// Catchall addresses are not supported for AccountAdd. Add them separately with
// AddressAdd.
func AccountAdd(ctx context.Context, account, address string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding account", rerr, slog.String("account", account), slog.String("address", address))
		}
	}()

	addr, err := smtp.ParseAddress(address)
	if err != nil {
		return fmt.Errorf("%w: parsing email address: %v", ErrRequest, err)
	}

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	if _, ok := c.Accounts[account]; ok {
		return fmt.Errorf("%w: account already present", ErrRequest)
	}
	if err := checkAddressAvailable(addr); err != nil {
		return fmt.Errorf("%w: address not available: %v", ErrRequest, err)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		nc.Accounts[name] = a
	}
	nc.Accounts[account] = MakeAccountConfig(addr)

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("account added", slog.String("account", account), slog.Any("address", addr))
	return nil
}
// AccountRemove removes an account and reloads the configuration.
func AccountRemove(ctx context.Context, account string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing account", rerr, slog.String("account", account))
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	if _, ok := c.Accounts[account]; !ok {
		return fmt.Errorf("%w: account does not exist", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		if name != account {
			nc.Accounts[name] = a
		}
	}

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	odir := filepath.Join(DataDirPath("accounts"), account)
	tmpdir := filepath.Join(DataDirPath("tmp"), "oldaccount-"+account)
	if err := os.Rename(odir, tmpdir); err != nil {
		log.Errorx("moving old account data directory out of the way", err, slog.String("account", account))
		return fmt.Errorf("account removed, but account data directory %q could not be moved out of the way: %v", odir, err)
	}
	if err := os.RemoveAll(tmpdir); err != nil {
		log.Errorx("removing old account data directory", err, slog.String("account", account))
		return fmt.Errorf("account removed, its data directory moved to %q, but removing failed: %v", tmpdir, err)
	}

	log.Info("account removed", slog.String("account", account))
	return nil
}
// checkAddressAvailable checks that the address after canonicalization is not
// already configured, and that its localpart does not contain the catchall
// localpart separator.
//
// Must be called with the config lock held.
func checkAddressAvailable(addr smtp.Address) error {
	dc, ok := Conf.Dynamic.Domains[addr.Domain.Name()]
	if !ok {
		return fmt.Errorf("domain does not exist")
	}
	lp := CanonicalLocalpart(addr.Localpart, dc)
	if _, ok := Conf.accountDestinations[smtp.NewAddress(lp, addr.Domain).String()]; ok {
		return fmt.Errorf("canonicalized address %s already configured", smtp.NewAddress(lp, addr.Domain))
	} else if dc.LocalpartCatchallSeparator != "" && strings.Contains(string(addr.Localpart), dc.LocalpartCatchallSeparator) {
		return fmt.Errorf("localpart cannot include domain catchall separator %s", dc.LocalpartCatchallSeparator)
	} else if _, ok := dc.Aliases[lp.String()]; ok {
		return fmt.Errorf("address in use as alias")
	}
	return nil
}
// AddressAdd adds an email address to an account and reloads the configuration.
// If the address starts with an @ it is treated as a catchall address for the
// domain.
func AddressAdd(ctx context.Context, address, account string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding address", rerr, slog.String("address", address), slog.String("account", account))
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	c := Conf.Dynamic
	a, ok := c.Accounts[account]
	if !ok {
		return fmt.Errorf("%w: account does not exist", ErrRequest)
	}

	var destAddr string
	if strings.HasPrefix(address, "@") {
		d, err := dns.ParseDomain(address[1:])
		if err != nil {
			return fmt.Errorf("%w: parsing domain: %v", ErrRequest, err)
		}
		dname := d.Name()
		destAddr = "@" + dname
		if _, ok := Conf.Dynamic.Domains[dname]; !ok {
			return fmt.Errorf("%w: domain does not exist", ErrRequest)
		} else if _, ok := Conf.accountDestinations[destAddr]; ok {
			return fmt.Errorf("%w: catchall address already configured for domain", ErrRequest)
		}
	} else {
		addr, err := smtp.ParseAddress(address)
		if err != nil {
			return fmt.Errorf("%w: parsing email address: %v", ErrRequest, err)
		}
		if err := checkAddressAvailable(addr); err != nil {
			return fmt.Errorf("%w: address not available: %v", ErrRequest, err)
		}
		destAddr = addr.String()
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		nc.Accounts[name] = a
	}
	nd := map[string]config.Destination{}
	for name, d := range a.Destinations {
		nd[name] = d
	}
	nd[destAddr] = config.Destination{}
	a.Destinations = nd
	nc.Accounts[account] = a

	if err := writeDynamic(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("address added", slog.String("address", address), slog.String("account", account))
	return nil
}
// AddressRemove removes an email address and reloads the configuration.
// Address can be a catchall address for the domain, of the form "@<domain>".
//
// If the address is a member of an alias, it is removed from the alias, unless
// it is the last member.
func AddressRemove(ctx context.Context, address string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing address", rerr, slog.String("address", address))
		}
	}()

	Conf.dynamicMutex.Lock()
	defer Conf.dynamicMutex.Unlock()

	ad, ok := Conf.accountDestinations[address]
	if !ok {
		return fmt.Errorf("%w: address does not exist", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	a, ok := Conf.Dynamic.Accounts[ad.Account]
	if !ok {
		return fmt.Errorf("internal error: cannot find account")
	}
	na := a
	na.Destinations = map[string]config.Destination{}
	var dropped bool
	for destAddr, d := range a.Destinations {
		if destAddr != address {
			na.Destinations[destAddr] = d
		} else {
			dropped = true
		}
	}
	if !dropped {
		return fmt.Errorf("%w: address not removed, likely a postmaster/reporting address", ErrRequest)
	}
// Also remove matching address from FromIDLoginAddresses, composing a new slice.
var fromIDLoginAddresses []string
var dom dns.Domain
var pa smtp.Address // For non-catchall addresses (most).
var err error
if strings.HasPrefix(address, "@") {
	dom, err = dns.ParseDomain(address[1:])
	if err != nil {
		return fmt.Errorf("%w: parsing domain for catchall address: %v", ErrRequest, err)
add webmail
it was far down on the roadmap, but implemented earlier, because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change. jmap has lots of other requirements, so it's already
a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and backend are written together, making their interface/api much
smaller and simpler than jmap.
one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes. keeping this
data consistent after any change to the stored messages (through the code base)
is tricky, so mox now has a consistency check that verifies the counts are
correct, which runs only during tests, each time an internal account reference
is closed. we have a few more internal "changes" that are propagated for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.
the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view, default is automatic based on
screen size. the panes can be resized by the user. buttons for actions are just
text, not icons. clicking a button briefly shows the shortcut for the action in
the bottom right, helping with learning to operate quickly. any text that is
underdotted has a title attribute that causes more information to be displayed,
e.g. what a button does or a field is about. to highlight potential phishing
attempts, any text (anywhere in the webclient) that switches unicode "blocks"
(a rough approximation to (language) scripts) within a word is underlined
orange. multiple messages can be selected with familiar ui interaction:
clicking while holding control and/or shift keys. keyboard navigation works
with arrows/page up/down and home/end keys, and also with a few basic vi-like
keys for list/message navigation. we prefer showing the text version of a
message instead of the html version (with inlined images only). html messages are shown
in an iframe served from an endpoint with CSP headers to prevent dangerous
resources (scripts, external images) from being loaded. the html is also
sanitized, with javascript removed. a user can choose to load external
resources (e.g. images for tracking purposes).
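the block/script-switch highlighting mentioned above can be approximated like
this. a simplified sketch using a few scripts from Go's unicode package, not
the actual webmail code (which is typescript and uses unicode blocks as a
rough approximation of scripts):

```go
package main

import (
	"fmt"
	"unicode"
)

// mixedScript reports whether a word switches between unicode scripts,
// approximating the orange-underline phishing heuristic described above.
// Only a few scripts are checked here for illustration.
func mixedScript(word string) bool {
	scripts := []*unicode.RangeTable{unicode.Latin, unicode.Cyrillic, unicode.Greek}
	prev := -1
	for _, r := range word {
		cur := -1
		for i, t := range scripts {
			if unicode.Is(t, r) {
				cur = i
				break
			}
		}
		if cur < 0 {
			continue // digits/punctuation don't indicate a switch
		}
		if prev >= 0 && cur != prev {
			return true
		}
		prev = cur
	}
	return false
}

func main() {
	fmt.Println(mixedScript("paypal")) // false
	fmt.Println(mixedScript("pаypal")) // true: contains cyrillic "а" (U+0430)
}
```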
the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend. since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls. the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple, it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, no additional
runtime code needed or complicated build processes used. the webmail is served
from a compressed, cacheable html file that includes the style and the
javascript, currently just over 225kb uncompressed, under 60kb compressed (not
minified, including comments). we include the generated js files in the
repository, to keep mox's Go binaries easily buildable and self-contained.
authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
WebmailHTTP:
	Enabled: true
WebmailHTTPS:
	Enabled: true
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
	}
} else {
	pa, err = smtp.ParseAddress(address)
	if err != nil {
		return fmt.Errorf("%w: parsing address: %v", ErrRequest, err)
	}
	dom = pa.Domain
}
for i, fa := range a.ParsedFromIDLoginAddresses {
	if fa.Domain != dom {
		// Keep for different domain.
		fromIDLoginAddresses = append(fromIDLoginAddresses, a.FromIDLoginAddresses[i])
		continue
	}
	if strings.HasPrefix(address, "@") {
		continue
	}
	dc, ok := Conf.Dynamic.Domains[dom.Name()]
	if !ok {
		return fmt.Errorf("%w: unknown domain in fromid login address %q", ErrRequest, fa.Pack(true))
	}
	flp := CanonicalLocalpart(fa.Localpart, dc)
	alp := CanonicalLocalpart(pa.Localpart, dc)
	if alp != flp {
		// Keep for different localpart.
		fromIDLoginAddresses = append(fromIDLoginAddresses, a.FromIDLoginAddresses[i])
	}
}
na.FromIDLoginAddresses = fromIDLoginAddresses

// And remove as member from aliases configured in domains.
domains := maps.Clone(Conf.Dynamic.Domains)
for _, aa := range na.Aliases {
	if aa.SubscriptionAddress != address {
		continue
	}
	aliasAddr := fmt.Sprintf("%s@%s", aa.Alias.LocalpartStr, aa.Alias.Domain.Name())
	dom, ok := Conf.Dynamic.Domains[aa.Alias.Domain.Name()]
	if !ok {
		return fmt.Errorf("cannot find domain for alias %s", aliasAddr)
	}
	a, ok := dom.Aliases[aa.Alias.LocalpartStr]
	if !ok {
		return fmt.Errorf("cannot find alias %s", aliasAddr)
	}
	a.Addresses = slices.Clone(a.Addresses)
	a.Addresses = slices.DeleteFunc(a.Addresses, func(v string) bool { return v == address })
	if len(a.Addresses) == 0 {
		return fmt.Errorf("address is last member of alias %s, add new members or remove alias first", aliasAddr)
	}
	a.ParsedAddresses = nil // Filled when parsing config.
	dom.Aliases = maps.Clone(dom.Aliases)
	dom.Aliases[aa.Alias.LocalpartStr] = a
	domains[aa.Alias.Domain.Name()] = dom
}
na.Aliases = nil // Filled when parsing config.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
nc := Conf . Dynamic
2023-02-11 01:47:19 +03:00
nc . Accounts = map [ string ] config . Account { }
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
for name , a := range Conf . Dynamic . Accounts {
2023-02-11 01:47:19 +03:00
nc . Accounts [ name ] = a
}
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
nc . Accounts [ ad . Account ] = na
2024-04-28 12:44:51 +03:00
nc . Domains = domains
2023-02-11 01:47:19 +03:00
2023-02-16 15:22:00 +03:00
if err := writeDynamic ( ctx , log , nc ) ; err != nil {
2024-04-18 12:14:24 +03:00
return fmt . Errorf ( "writing domains.conf: %w" , err )
2023-02-11 01:47:19 +03:00
}
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
log . Info ( "address removed" , slog . String ( "address" , address ) , slog . String ( "account" , ad . Account ) )
2023-02-11 01:47:19 +03:00
return nil
}
2024-04-24 20:15:30 +03:00
func AliasAdd ( ctx context . Context , addr smtp . Address , alias config . Alias ) error {
return DomainSave ( ctx , addr . Domain . Name ( ) , func ( d * config . Domain ) error {
if _ , ok := d . Aliases [ addr . Localpart . String ( ) ] ; ok {
return fmt . Errorf ( "%w: alias already present" , ErrRequest )
}
if d . Aliases == nil {
d . Aliases = map [ string ] config . Alias { }
}
d . Aliases = maps . Clone ( d . Aliases )
d . Aliases [ addr . Localpart . String ( ) ] = alias
return nil
} )
}
func AliasUpdate ( ctx context . Context , addr smtp . Address , alias config . Alias ) error {
return DomainSave ( ctx , addr . Domain . Name ( ) , func ( d * config . Domain ) error {
a , ok := d . Aliases [ addr . Localpart . String ( ) ]
if ! ok {
return fmt . Errorf ( "%w: alias does not exist" , ErrRequest )
}
a . PostPublic = alias . PostPublic
a . ListMembers = alias . ListMembers
a . AllowMsgFrom = alias . AllowMsgFrom
d . Aliases = maps . Clone ( d . Aliases )
d . Aliases [ addr . Localpart . String ( ) ] = a
return nil
} )
}
func AliasRemove ( ctx context . Context , addr smtp . Address ) error {
return DomainSave ( ctx , addr . Domain . Name ( ) , func ( d * config . Domain ) error {
_ , ok := d . Aliases [ addr . Localpart . String ( ) ]
if ! ok {
return fmt . Errorf ( "%w: alias does not exist" , ErrRequest )
}
d . Aliases = maps . Clone ( d . Aliases )
delete ( d . Aliases , addr . Localpart . String ( ) )
return nil
} )
}
func AliasAddressesAdd ( ctx context . Context , addr smtp . Address , addresses [ ] string ) error {
if len ( addresses ) == 0 {
return fmt . Errorf ( "%w: at least one address required" , ErrRequest )
}
return DomainSave ( ctx , addr . Domain . Name ( ) , func ( d * config . Domain ) error {
alias , ok := d . Aliases [ addr . Localpart . String ( ) ]
if ! ok {
return fmt . Errorf ( "%w: no such alias" , ErrRequest )
}
alias . Addresses = append ( slices . Clone ( alias . Addresses ) , addresses ... )
alias . ParsedAddresses = nil
d . Aliases = maps . Clone ( d . Aliases )
d . Aliases [ addr . Localpart . String ( ) ] = alias
return nil
} )
}
func AliasAddressesRemove ( ctx context . Context , addr smtp . Address , addresses [ ] string ) error {
if len ( addresses ) == 0 {
return fmt . Errorf ( "%w: need at least one address" , ErrRequest )
}
return DomainSave ( ctx , addr . Domain . Name ( ) , func ( d * config . Domain ) error {
alias , ok := d . Aliases [ addr . Localpart . String ( ) ]
if ! ok {
return fmt . Errorf ( "%w: no such alias" , ErrRequest )
}
alias . Addresses = slices . DeleteFunc ( slices . Clone ( alias . Addresses ) , func ( addr string ) bool {
n := len ( addresses )
addresses = slices . DeleteFunc ( addresses , func ( a string ) bool { return a == addr } )
return n > len ( addresses )
} )
if len ( addresses ) > 0 {
return fmt . Errorf ( "%w: address not found: %s" , ErrRequest , strings . Join ( addresses , ", " ) )
}
alias . ParsedAddresses = nil
d . Aliases = maps . Clone ( d . Aliases )
d . Aliases [ addr . Localpart . String ( ) ] = alias
return nil
} )
}
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
// AccountSave updates the configuration of an account. Function xmodify is called
// with a shallow copy of the current configuration of the account. It must not
// change referencing fields (e.g. existing slice/map/pointer), they may still be
// in use, and the change may be rolled back. Referencing values must be copied and
// replaced by the modify. The function may raise a panic for error handling.
func AccountSave ( ctx context . Context , account string , xmodify func ( acc * config . Account ) ) ( rerr error ) {
2023-12-05 15:35:58 +03:00
log := pkglog . WithContext ( ctx )
2023-03-28 21:50:36 +03:00
defer func ( ) {
if rerr != nil {
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
log . Errorx ( "saving account fields" , rerr , slog . String ( "account" , account ) )
2023-03-28 21:50:36 +03:00
}
} ( )
Conf . dynamicMutex . Lock ( )
defer Conf . dynamicMutex . Unlock ( )
c := Conf . Dynamic
acc , ok := c . Accounts [ account ]
if ! ok {
2024-04-19 11:23:53 +03:00
return fmt . Errorf ( "%w: account not present" , ErrRequest )
2023-03-28 21:50:36 +03:00
}
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
2024-04-15 22:49:02 +03:00
xmodify ( & acc )
2023-03-28 21:50:36 +03:00
// Compose new config without modifying existing data structures. If we fail, we
// leave no trace.
nc := c
nc . Accounts = map [ string ] config . Account { }
for name , a := range c . Accounts {
nc . Accounts [ name ] = a
}
nc . Accounts [ account ] = acc
if err := writeDynamic ( ctx , log , nc ) ; err != nil {
2024-04-15 22:49:02 +03:00
		return fmt.Errorf("writing domains.conf: %w", err)
2023-03-28 21:50:36 +03:00
	}
2024-04-15 22:49:02 +03:00
	log.Info("account fields saved", slog.String("account", account))
2023-03-28 21:50:36 +03:00
	return nil
}
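The commit message above describes a "unique smtp mail from": the localpart, the domain's catchall separator (e.g. "+"), and a random fromid, so returned DSNs can be matched to the original outgoing message. A minimal sketch of composing such an address; the name and shape are illustrative, not mox's actual implementation:

```go
import (
	"crypto/rand"
	"encoding/base64"
)

// uniqueFromAddress composes a MAIL FROM address with a random fromid after
// the catchall separator, e.g. "user+h2A9xQ3k0aE@example.org". Hypothetical
// helper for illustration only.
func uniqueFromAddress(localpart, separator, domain string) string {
	buf := make([]byte, 8)
	if _, err := rand.Read(buf); err != nil {
		panic(err) // crypto/rand reads should not fail
	}
	fromid := base64.RawURLEncoding.EncodeToString(buf) // 11 chars for 8 bytes
	return localpart + separator + fromid + "@" + domain
}
```

An incoming DSN addressed to such an address can then be related back to the queued message by looking up the fromid part between the separator and the "@".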
2023-09-23 13:05:40 +03:00
type TLSMode uint8

const (
	TLSModeImmediate TLSMode = 0
	TLSModeSTARTTLS  TLSMode = 1
	TLSModeNone      TLSMode = 2
)

type ProtocolConfig struct {
	Host    dns.Domain
	Port    int
	TLSMode TLSMode
}
2023-01-30 16:27:06 +03:00
type ClientConfig struct {
2023-09-23 13:05:40 +03:00
	IMAP       ProtocolConfig
	Submission ProtocolConfig
}

// ClientConfigDomain returns a single IMAP and Submission client configuration for
// a domain.
func ClientConfigDomain(d dns.Domain) (rconfig ClientConfig, rerr error) {
	var haveIMAP, haveSubmission bool
assume a dns cname record mail.<domain>, pointing to the hostname of the mail server, for clients to connect to
the autoconfig/autodiscover endpoints, and the printed client settings (in
quickstart, in the admin interface) now all point to the cname record (called
"client settings domain"). it is configurable per domain, and set to
"mail.<domain>" by default. for existing mox installs, the domain can be added
by editing the config file.
this makes it easier for a domain to migrate to another server in the future.
client settings don't have to be updated, the cname can just be changed.
before, the hostname of the mail server was configured in email clients.
migrating away would require changing settings in all clients.
if a client settings domain is configured, a TLS certificate for the name will
be requested through ACME, or must be configured manually.
2023-12-24 13:01:16 +03:00
	domConf, ok := Conf.Domain(d)
	if !ok {
2024-04-19 11:23:53 +03:00
		return ClientConfig{}, fmt.Errorf("%w: unknown domain", ErrRequest)
2023-09-23 13:05:40 +03:00
	}
	gather := func(l config.Listener) (done bool) {
		host := Conf.Static.HostnameDomain
		if l.Hostname != "" {
			host = l.HostnameDomain
		}
2023-12-24 13:01:16 +03:00
		if domConf.ClientSettingsDomain != "" {
			host = domConf.ClientSettingsDNSDomain
		}
2023-09-23 13:05:40 +03:00
		if !haveIMAP && l.IMAPS.Enabled {
			rconfig.IMAP.Host = host
			rconfig.IMAP.Port = config.Port(l.IMAPS.Port, 993)
			rconfig.IMAP.TLSMode = TLSModeImmediate
			haveIMAP = true
		}
		if !haveIMAP && l.IMAP.Enabled {
			rconfig.IMAP.Host = host
			rconfig.IMAP.Port = config.Port(l.IMAP.Port, 143)
			rconfig.IMAP.TLSMode = TLSModeSTARTTLS
			if l.TLS == nil {
				rconfig.IMAP.TLSMode = TLSModeNone
			}
			haveIMAP = true
		}
		if !haveSubmission && l.Submissions.Enabled {
			rconfig.Submission.Host = host
			rconfig.Submission.Port = config.Port(l.Submissions.Port, 465)
			rconfig.Submission.TLSMode = TLSModeImmediate
			haveSubmission = true
		}
		if !haveSubmission && l.Submission.Enabled {
			rconfig.Submission.Host = host
			rconfig.Submission.Port = config.Port(l.Submission.Port, 587)
			rconfig.Submission.TLSMode = TLSModeSTARTTLS
			if l.TLS == nil {
				rconfig.Submission.TLSMode = TLSModeNone
			}
			haveSubmission = true
		}
		return haveIMAP && haveSubmission
	}

	// Look at the public listener first. Most likely the intended configuration.
	if public, ok := Conf.Static.Listeners["public"]; ok {
		if gather(public) {
			return
		}
	}

	// Go through the other listeners in consistent order.
	names := maps.Keys(Conf.Static.Listeners)
	sort.Strings(names)
	for _, name := range names {
		if gather(Conf.Static.Listeners[name]) {
			return
		}
	}
2024-04-19 11:23:53 +03:00
	return ClientConfig{}, fmt.Errorf("%w: no listeners found for imap and/or submission", ErrRequest)
2023-01-30 16:27:06 +03:00
}
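The gather closure above picks the host to advertise in a fixed order: the per-domain client settings domain wins, then a per-listener hostname, then the server's static hostname. That selection can be sketched in isolation; plain strings stand in for dns.Domain here, and the function name is illustrative:

```go
// chooseClientHost mirrors the host-selection order used when building client
// configs: client settings domain > listener hostname > static hostname.
// Empty strings mean "not configured".
func chooseClientHost(staticHost, listenerHost, clientSettingsHost string) string {
	host := staticHost
	if listenerHost != "" {
		host = listenerHost
	}
	if clientSettingsHost != "" {
		host = clientSettingsHost
	}
	return host
}
```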
2023-09-23 13:05:40 +03:00
// ClientConfigs holds the client configuration for IMAP/Submission for a
// domain.
type ClientConfigs struct {
	Entries []ClientConfigsEntry
}

type ClientConfigsEntry struct {
2023-01-30 16:27:06 +03:00
	Protocol string
	Host     dns.Domain
	Port     int
	Listener string
	Note     string
}
2023-09-23 13:05:40 +03:00
// ClientConfigsDomain returns the client configs for IMAP/Submission for a
2023-01-30 16:27:06 +03:00
// domain.
2023-09-23 13:05:40 +03:00
func ClientConfigsDomain(d dns.Domain) (ClientConfigs, error) {
2023-12-24 13:01:16 +03:00
	domConf, ok := Conf.Domain(d)
2023-01-30 16:27:06 +03:00
	if !ok {
2024-04-19 11:23:53 +03:00
		return ClientConfigs{}, fmt.Errorf("%w: unknown domain", ErrRequest)
2023-01-30 16:27:06 +03:00
	}
2023-09-23 13:05:40 +03:00
	c := ClientConfigs{}
	c.Entries = []ClientConfigsEntry{}
2023-01-30 16:27:06 +03:00
	var listeners []string
	for name := range Conf.Static.Listeners {
		listeners = append(listeners, name)
	}
	sort.Slice(listeners, func(i, j int) bool {
		return listeners[i] < listeners[j]
	})

	note := func(tls bool, requiretls bool) string {
		if !tls {
			return "plain text, no STARTTLS configured"
		}
		if requiretls {
			return "STARTTLS required"
		}
		return "STARTTLS optional"
	}

	for _, name := range listeners {
		l := Conf.Static.Listeners[name]
		host := Conf.Static.HostnameDomain
		if l.Hostname != "" {
			host = l.HostnameDomain
		}
2023-12-24 13:01:16 +03:00
		if domConf.ClientSettingsDomain != "" {
			host = domConf.ClientSettingsDNSDomain
		}
2023-01-30 16:27:06 +03:00
		if l.Submissions.Enabled {
2023-09-23 13:05:40 +03:00
			c.Entries = append(c.Entries, ClientConfigsEntry{"Submission (SMTP)", host, config.Port(l.Submissions.Port, 465), name, "with TLS"})
2023-01-30 16:27:06 +03:00
		}
		if l.IMAPS.Enabled {
2023-09-23 13:05:40 +03:00
			c.Entries = append(c.Entries, ClientConfigsEntry{"IMAP", host, config.Port(l.IMAPS.Port, 993), name, "with TLS"})
2023-01-30 16:27:06 +03:00
		}
		if l.Submission.Enabled {
2023-09-23 13:05:40 +03:00
			c.Entries = append(c.Entries, ClientConfigsEntry{"Submission (SMTP)", host, config.Port(l.Submission.Port, 587), name, note(l.TLS != nil, !l.Submission.NoRequireSTARTTLS)})
2023-01-30 16:27:06 +03:00
		}
		if l.IMAP.Enabled {
2023-09-23 13:05:40 +03:00
			c.Entries = append(c.Entries, ClientConfigsEntry{"IMAP", host, config.Port(l.IMAP.Port, 143), name, note(l.TLS != nil, !l.IMAP.NoRequireSTARTTLS)})
2023-01-30 16:27:06 +03:00
		}
	}
	return c, nil
}
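The entries above rely on config.Port's fallback semantics: an explicitly configured port wins, otherwise the protocol default applies (993 for IMAPS, 143 for IMAP, 465 for submissions, 587 for submission). A standalone sketch of that behavior, assuming 0 means "not configured":

```go
// port returns the configured port if set, else the protocol's default.
// Illustrative stand-in for mox's config.Port helper.
func port(configured, fallback int) int {
	if configured != 0 {
		return configured
	}
	return fallback
}
```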
2023-08-11 11:13:17 +03:00
// IPs returns ip addresses we may be listening/receiving mail on or
// connecting/sending from to the outside.
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
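The route evaluation described above (account routes first, then domain, then global; first match wins; an empty transport means direct delivery) can be sketched as follows. The types and field names are illustrative, not mox's actual config structures:

```go
// route is a simplified stand-in for a mox route: empty matcher fields match
// anything, and minAttempt makes the route apply only from that delivery
// attempt onward.
type route struct {
	toDomain   string // recipient domain to match, "" matches any
	fromDomain string // sender domain to match, "" matches any
	minAttempt int    // route applies from this attempt number onward
	transport  string // "" means direct delivery
}

// selectTransport evaluates routes in account, domain, global order and
// returns the transport of the first matching route, or "" for direct
// delivery when nothing matches.
func selectTransport(accountRoutes, domainRoutes, globalRoutes []route, to, from string, attempt int) string {
	for _, routes := range [][]route{accountRoutes, domainRoutes, globalRoutes} {
		for _, r := range routes {
			if (r.toDomain == "" || r.toDomain == to) &&
				(r.fromDomain == "" || r.fromDomain == from) &&
				attempt >= r.minAttempt {
				return r.transport
			}
		}
	}
	return ""
}
```

With a global route like {minAttempt: 4, transport: "smarthost"}, early attempts deliver directly and attempts four and later go through the smarthost, matching the "3rd party from the 4th attempt onwards" example above.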
2023-06-16 19:38:28 +03:00
func IPs(ctx context.Context, receiveOnly bool) ([]net.IP, error) {
2023-12-05 15:35:58 +03:00
	log := pkglog.WithContext(ctx)
2023-01-30 16:27:06 +03:00
	// Try to gather all IPs we are listening on by going through the config.
	// If we encounter 0.0.0.0 or ::, we'll gather all local IPs afterwards.
	var ips []net.IP
	var ipv4all, ipv6all bool
	for _, l := range Conf.Static.Listeners {
2023-03-09 17:24:06 +03:00
		// If NATed, we don't know our external IPs.
		if l.IPsNATed {
			return nil, nil
		}
2023-08-11 11:13:17 +03:00
		check := l.IPs
		if len(l.NATIPs) > 0 {
			check = l.NATIPs
		}
		for _, s := range check {
2023-01-30 16:27:06 +03:00
			ip := net.ParseIP(s)
			if ip.IsUnspecified() {
				if ip.To4() != nil {
					ipv4all = true
				} else {
					ipv6all = true
				}
				continue
			}
			ips = append(ips, ip)
		}
	}
	// We'll list the IPs on the interfaces. How useful is this? There is a good chance
2023-02-05 23:25:48 +03:00
	// we're listening on all addresses because of a load balancer/firewall.
2023-01-30 16:27:06 +03:00
	if ipv4all || ipv6all {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, fmt.Errorf("listing network interfaces: %v", err)
		}
		for _, iface := range ifaces {
			if iface.Flags&net.FlagUp == 0 {
				continue
			}
			addrs, err := iface.Addrs()
			if err != nil {
				return nil, fmt.Errorf("listing addresses for network interface: %v", err)
			}
			if len(addrs) == 0 {
				continue
			}
			for _, addr := range addrs {
				ip, _, err := net.ParseCIDR(addr.String())
				if err != nil {
2023-12-05 15:35:58 +03:00
					log.Errorx("bad interface addr", err, slog.Any("address", addr))
2023-01-30 16:27:06 +03:00
					continue
				}
				v4 := ip.To4() != nil
				if ipv4all && v4 || ipv6all && !v4 {
					ips = append(ips, ip)
				}
			}
		}
	}
2023-06-16 19:38:28 +03:00
	if receiveOnly {
		return ips, nil
	}
	for _, t := range Conf.Static.Transports {
		if t.Socks != nil {
			ips = append(ips, t.Socks.IPs...)
		}
	}
2023-01-30 16:27:06 +03:00
	return ips, nil
}
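The first loop of IPs collects explicit listener addresses while treating 0.0.0.0 and :: as flags to expand to all local interface addresses afterwards. That split can be shown in isolation; the function name is illustrative:

```go
import "net"

// splitUnspecified separates concrete listener addresses from the
// unspecified wildcards: 0.0.0.0 sets ipv4all, :: sets ipv6all, and any
// other address is collected as-is (mirroring the loop in IPs above).
func splitUnspecified(addrs []string) (ips []net.IP, ipv4all, ipv6all bool) {
	for _, s := range addrs {
		ip := net.ParseIP(s)
		if ip.IsUnspecified() {
			if ip.To4() != nil {
				ipv4all = true
			} else {
				ipv6all = true
			}
			continue
		}
		ips = append(ips, ip)
	}
	return
}
```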