package admin

import (
	"bytes"
	"context"
	"crypto/ed25519"
	cryptorand "crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log/slog"
	"os"
	"path/filepath"
	"slices"
	"strings"
	"time"

	"golang.org/x/exp/maps"

	"github.com/mjl-/mox/config"
	"github.com/mjl-/mox/dns"
	"github.com/mjl-/mox/junk"
	"github.com/mjl-/mox/mlog"
	"github.com/mjl-/mox/mox-"
	"github.com/mjl-/mox/mtasts"
	"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/store"
2023-01-30 16:27:06 +03:00
)

var pkglog = mlog.New("admin", nil)

var ErrRequest = errors.New("bad request")

// MakeDKIMEd25519Key returns a PEM buffer containing an ed25519 key for use
// with DKIM.
// selector and domain can be empty. If not, they are used in the note.
func MakeDKIMEd25519Key(selector, domain dns.Domain) ([]byte, error) {
	_, privKey, err := ed25519.GenerateKey(cryptorand.Reader)
	if err != nil {
		return nil, fmt.Errorf("generating key: %w", err)
	}
	pkcs8, err := x509.MarshalPKCS8PrivateKey(privKey)
	if err != nil {
		return nil, fmt.Errorf("marshal key: %w", err)
	}
	block := &pem.Block{
		Type: "PRIVATE KEY",
		Headers: map[string]string{
			"Note": dkimKeyNote("ed25519", selector, domain),
		},
		Bytes: pkcs8,
	}
	b := &bytes.Buffer{}
	if err := pem.Encode(b, block); err != nil {
		return nil, fmt.Errorf("encoding pem: %w", err)
	}
	return b.Bytes(), nil
}

func dkimKeyNote(kind string, selector, domain dns.Domain) string {
	s := kind + " dkim private key"
	var zero dns.Domain
	if selector != zero && domain != zero {
		s += fmt.Sprintf(" for %s._domainkey.%s", selector.ASCII, domain.ASCII)
	}
	s += fmt.Sprintf(", generated by mox on %s", time.Now().Format(time.RFC3339))
	return s
}

// MakeDKIMRSAKey returns a PEM buffer containing an rsa key for use with
// DKIM.
// selector and domain can be empty. If not, they are used in the note.
func MakeDKIMRSAKey(selector, domain dns.Domain) ([]byte, error) {
	// 2048 bits seems reasonable in 2022, 1024 is on the low side, larger
	// keys may not fit in UDP DNS response.
	privKey, err := rsa.GenerateKey(cryptorand.Reader, 2048)
	if err != nil {
		return nil, fmt.Errorf("generating key: %w", err)
	}
	pkcs8, err := x509.MarshalPKCS8PrivateKey(privKey)
	if err != nil {
		return nil, fmt.Errorf("marshal key: %w", err)
	}
	block := &pem.Block{
		Type: "PRIVATE KEY",
		Headers: map[string]string{
			"Note": dkimKeyNote("rsa-2048", selector, domain),
		},
		Bytes: pkcs8,
	}
	b := &bytes.Buffer{}
	if err := pem.Encode(b, block); err != nil {
		return nil, fmt.Errorf("encoding pem: %w", err)
	}
	return b.Bytes(), nil
}
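
// Example (hypothetical caller, not part of this package; the selector "2024a"
// and domain "example.com" are assumptions): generating a DKIM key in PKCS#8
// PEM form and writing it to disk.
//
//	selector := dns.Domain{ASCII: "2024a"}
//	domain, err := dns.ParseDomain("example.com")
//	if err != nil {
//		return err
//	}
//	pemKey, err := admin.MakeDKIMEd25519Key(selector, domain)
//	if err != nil {
//		return err
//	}
//	// For an RSA key, use admin.MakeDKIMRSAKey(selector, domain) instead.
//	if err := os.WriteFile("2024a.example.com.privatekey.pkcs8.pem", pemKey, 0660); err != nil {
//		return err
//	}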

// MakeAccountConfig returns a new account configuration for an email address.
func MakeAccountConfig(addr smtp.Address) config.Account {
	account := config.Account{
		Domain: addr.Domain.Name(),
		Destinations: map[string]config.Destination{
			addr.String(): {},
		},
		RejectsMailbox: "Rejects",
		JunkFilter: &config.JunkFilter{
			Threshold: 0.95,
			Params: junk.Params{
				Onegrams:    true,
				MaxPower:    .01,
				TopWords:    10,
				IgnoreWords: .1,
				RareWords:   2,
			},
		},
	}

	// Most mail clients don't set the IMAP $Junk/$NotJunk flags themselves. So
	// mailboxes can be configured to automatically mark messages that are
	// moved/copied/delivered into them as junk, as not junk, or as neither, and the
	// junk filter is trained based on those flags.
	account.AutomaticJunkFlags.Enabled = true
	account.AutomaticJunkFlags.JunkMailboxRegexp = "^(junk|spam)"
	account.AutomaticJunkFlags.NeutralMailboxRegexp = "^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects)"
	account.SubjectPass.Period = 12 * time.Hour
	return account
}
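
// Example (hypothetical caller; the address is an assumption): building the
// config for a new account.
//
//	addr, err := smtp.ParseAddress("info@example.com")
//	if err != nil {
//		return err
//	}
//	acc := admin.MakeAccountConfig(addr)
//	// acc now has "info@example.com" as destination, a junk filter with default
//	// parameters, and automatic junk flags enabled.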

// writeFile writes data to a new file at path, which must not yet exist, and
// removes the file again if writing fails.
func writeFile(log mlog.Log, path string, data []byte) error {
	os.MkdirAll(filepath.Dir(path), 0770)
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0660)
	if err != nil {
		return fmt.Errorf("creating file %s: %s", path, err)
	}
	defer func() {
		if f != nil {
			err := f.Close()
			log.Check(err, "closing file after error")
			err = os.Remove(path)
			log.Check(err, "removing file after error", slog.String("path", path))
		}
	}()
	if _, err := f.Write(data); err != nil {
		return fmt.Errorf("writing file %s: %s", path, err)
	}
	if err := f.Close(); err != nil {
		return fmt.Errorf("close file: %v", err)
	}
	f = nil
	return nil
}

// MakeDomainConfig makes a new config for a domain, creating DKIM keys, using
// accountName for DMARC and TLS reports.
func MakeDomainConfig(ctx context.Context, domain, hostname dns.Domain, accountName string, withMTASTS bool) (config.Domain, []string, error) {
	log := pkglog.WithContext(ctx)

	now := time.Now()
	year := now.Format("2006")
	timestamp := now.Format("20060102T150405")

	var paths []string
	defer func() {
		for _, p := range paths {
			err := os.Remove(p)
			log.Check(err, "removing path for domain config", slog.String("path", p))
		}
	}()

	confDKIM := config.DKIM{
		Selectors: map[string]config.Selector{},
	}

	addSelector := func(kind, name string, privKey []byte) error {
		record := fmt.Sprintf("%s._domainkey.%s", name, domain.ASCII)
		keyPath := filepath.Join("dkim", fmt.Sprintf("%s.%s.%s.privatekey.pkcs8.pem", record, timestamp, kind))
		p := mox.ConfigDynamicDirPath(keyPath)
		if err := writeFile(log, p, privKey); err != nil {
			return err
		}
		paths = append(paths, p)

		confDKIM.Selectors[name] = config.Selector{
			// Example from RFC has 5 days between signing and expiration. ../rfc/6376:1393
			// Expiration is not intended as antireplay defense, but it may help. ../rfc/6376:1340
			// Messages in the wild have been observed with 2 hours and 1 year expiration.
			Expiration:     "72h",
			PrivateKeyFile: keyPath,
		}
		return nil
	}

	addEd25519 := func(name string) error {
		key, err := MakeDKIMEd25519Key(dns.Domain{ASCII: name}, domain)
		if err != nil {
			return fmt.Errorf("making dkim ed25519 private key: %s", err)
		}
		return addSelector("ed25519", name, key)
	}

	addRSA := func(name string) error {
		key, err := MakeDKIMRSAKey(dns.Domain{ASCII: name}, domain)
		if err != nil {
			return fmt.Errorf("making dkim rsa private key: %s", err)
		}
		return addSelector("rsa2048", name, key)
	}

	if err := addEd25519(year + "a"); err != nil {
		return config.Domain{}, nil, err
	}
	if err := addRSA(year + "b"); err != nil {
		return config.Domain{}, nil, err
	}
	if err := addEd25519(year + "c"); err != nil {
		return config.Domain{}, nil, err
	}
	if err := addRSA(year + "d"); err != nil {
		return config.Domain{}, nil, err
	}

	// We sign with the first two. In case they are misused, the switch to the other
	// keys is easy, just change the config. Operators should make the public key field
	// of the misused keys empty in the DNS records to disable the misused keys.
	confDKIM.Sign = []string{year + "a", year + "b"}

	confDomain := config.Domain{
		// The client settings domain is a CNAME, mail.<domain> by default, pointing to
		// the host name of this mail server. The autoconfig/autodiscover endpoints and
		// printed client settings use it, so a domain can later migrate to another
		// server by only changing the CNAME. A TLS certificate for the name is
		// requested through ACME, or must be configured manually.
		ClientSettingsDomain:       "mail." + domain.Name(),
		LocalpartCatchallSeparator: "+",
		DKIM:                       confDKIM,
		DMARC: &config.DMARC{
			Account:   accountName,
			Localpart: "dmarc-reports",
			Mailbox:   "DMARC",
		},
		TLSRPT: &config.TLSRPT{
			Account:   accountName,
			Localpart: "tls-reports",
			Mailbox:   "TLSRPT",
		},
	}

	if withMTASTS {
		confDomain.MTASTS = &config.MTASTS{
			PolicyID: time.Now().UTC().Format("20060102T150405"),
			Mode:     mtasts.ModeEnforce,
			// We start out with 24 hours, and warn in the admin interface that users
			// should increase it to weeks once the setup works.
			MaxAge: 24 * time.Hour,
			MX:     []string{hostname.ASCII},
		}
	}

	rpaths := paths
	paths = nil

	return confDomain, rpaths, nil
}
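
// Example (hypothetical; "example.com" and the account name are assumptions):
// preparing a config for a new domain. The returned paths are the DKIM key
// files written below the dynamic config directory, to be removed by the
// caller if the domain is not actually added.
//
//	domain, err := dns.ParseDomain("example.com")
//	if err != nil {
//		return err
//	}
//	confDomain, keyPaths, err := admin.MakeDomainConfig(ctx, domain, mox.Conf.Static.HostnameDomain, "myaccount", true)
//	if err != nil {
//		return err
//	}
//	_ = confDomain
//	_ = keyPaths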

// DKIMAdd adds a DKIM selector for a domain, generating a key and writing it to disk.
func DKIMAdd(ctx context.Context, domain, selector dns.Domain, algorithm, hash string, headerRelaxed, bodyRelaxed, seal bool, headers []string, lifetime time.Duration) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding dkim key", rerr,
				slog.Any("domain", domain),
				slog.Any("selector", selector))
		}
	}()

	switch hash {
	case "sha256", "sha1":
	default:
		return fmt.Errorf("%w: unknown hash algorithm %q", ErrRequest, hash)
	}

	var privKey []byte
	var err error
	var kind string
	switch algorithm {
	case "rsa":
		privKey, err = MakeDKIMRSAKey(selector, domain)
		kind = "rsa2048"
	case "ed25519":
		privKey, err = MakeDKIMEd25519Key(selector, domain)
		kind = "ed25519"
	default:
		err = fmt.Errorf("unknown algorithm")
	}
	if err != nil {
		return fmt.Errorf("%w: making dkim key: %v", ErrRequest, err)
	}

	// Only take lock now, we don't want to hold it while generating a key.
	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	d, ok := c.Domains[domain.Name()]
	if !ok {
		return fmt.Errorf("%w: domain does not exist", ErrRequest)
	}
	if _, ok := d.DKIM.Selectors[selector.Name()]; ok {
		return fmt.Errorf("%w: selector already exists for domain", ErrRequest)
	}

	record := fmt.Sprintf("%s._domainkey.%s", selector.ASCII, domain.ASCII)
	timestamp := time.Now().Format("20060102T150405")
	keyPath := filepath.Join("dkim", fmt.Sprintf("%s.%s.%s.privatekey.pkcs8.pem", record, timestamp, kind))
	p := mox.ConfigDynamicDirPath(keyPath)
	if err := writeFile(log, p, privKey); err != nil {
		return fmt.Errorf("writing key file: %v", err)
	}
	removePath := p
	defer func() {
		if removePath != "" {
			err := os.Remove(removePath)
			log.Check(err, "removing path for dkim key", slog.String("path", removePath))
		}
	}()

	nsel := config.Selector{
		Hash: hash,
		Canonicalization: config.Canonicalization{
			HeaderRelaxed: headerRelaxed,
			BodyRelaxed:   bodyRelaxed,
		},
		Headers:         headers,
		DontSealHeaders: !seal,
		Expiration:      lifetime.String(),
		PrivateKeyFile:  keyPath,
	}

	// All good, time to update the config.
	nd := d
	nd.DKIM.Selectors = map[string]config.Selector{}
	for name, osel := range d.DKIM.Selectors {
		nd.DKIM.Selectors[name] = osel
	}
	nd.DKIM.Selectors[selector.Name()] = nsel
	nc := c
	nc.Domains = map[string]config.Domain{}
	for name, dom := range c.Domains {
		nc.Domains[name] = dom
	}
	nc.Domains[domain.Name()] = nd

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("dkim key added", slog.Any("domain", domain), slog.Any("selector", selector))
	removePath = "" // Prevent cleanup of key file.
	return nil
}
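
// Example (hypothetical; domain, selector and header list are assumptions for
// illustration): adding an ed25519 selector with relaxed/relaxed
// canonicalization, sealed headers and a 72-hour signature lifetime.
//
//	domain, _ := dns.ParseDomain("example.com")
//	selector, _ := dns.ParseDomain("2025a")
//	headers := []string{"From", "To", "Subject", "Date", "Message-ID", "Content-Type"}
//	if err := admin.DKIMAdd(ctx, domain, selector, "ed25519", "sha256", true, true, true, headers, 72*time.Hour); err != nil {
//		return err
//	}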

// DKIMRemove removes the selector from the domain, moving the key file out of the way.
func DKIMRemove(ctx context.Context, domain, selector dns.Domain) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing dkim key", rerr,
				slog.Any("domain", domain),
				slog.Any("selector", selector))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	d, ok := c.Domains[domain.Name()]
	if !ok {
		return fmt.Errorf("%w: domain does not exist", ErrRequest)
	}
	sel, ok := d.DKIM.Selectors[selector.Name()]
	if !ok {
		return fmt.Errorf("%w: selector does not exist for domain", ErrRequest)
	}

	nsels := map[string]config.Selector{}
	for name, sel := range d.DKIM.Selectors {
		if name != selector.Name() {
			nsels[name] = sel
		}
	}
	nsign := make([]string, 0, len(d.DKIM.Sign))
	for _, name := range d.DKIM.Sign {
		if name != selector.Name() {
			nsign = append(nsign, name)
		}
	}

	nd := d
	nd.DKIM = config.DKIM{Selectors: nsels, Sign: nsign}
	nc := c
	nc.Domains = map[string]config.Domain{}
	for name, dom := range c.Domains {
		nc.Domains[name] = dom
	}
	nc.Domains[domain.Name()] = nd

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	// Move away a DKIM private key to a subdirectory "old". But only if
	// not in use by other domains.
	usedKeyPaths := gatherUsedKeysPaths(nc)
	moveAwayKeys(log, map[string]config.Selector{selector.Name(): sel}, usedKeyPaths)

	log.Info("dkim key removed", slog.Any("domain", domain), slog.Any("selector", selector))
	return nil
}

// DomainAdd adds the domain to the domains config, rewriting domains.conf and
// marking it loaded.
//
// accountName is used for DMARC/TLS reports and potentially for the postmaster address.
// If the account does not exist, it is created with localpart. Localpart must be
// set only if the account does not yet exist.
func DomainAdd(ctx context.Context, domain dns.Domain, accountName string, localpart smtp.Localpart) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding domain", rerr,
				slog.Any("domain", domain),
				slog.String("account", accountName),
				slog.Any("localpart", localpart))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	if _, ok := c.Domains[domain.Name()]; ok {
		return fmt.Errorf("%w: domain already present", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Domains = map[string]config.Domain{}
	for name, d := range c.Domains {
		nc.Domains[name] = d
	}

	// Only enable mta-sts for domain if there is a listener with mta-sts.
	var withMTASTS bool
	for _, l := range mox.Conf.Static.Listeners {
		if l.MTASTSHTTPS.Enabled {
			withMTASTS = true
			break
		}
	}

	confDomain, cleanupFiles, err := MakeDomainConfig(ctx, domain, mox.Conf.Static.HostnameDomain, accountName, withMTASTS)
	if err != nil {
		return fmt.Errorf("preparing domain config: %v", err)
	}
	defer func() {
		for _, f := range cleanupFiles {
			err := os.Remove(f)
			log.Check(err, "cleaning up file after error", slog.String("path", f))
		}
	}()

	if _, ok := c.Accounts[accountName]; ok && localpart != "" {
		return fmt.Errorf("%w: account already exists (leave localpart empty when using an existing account)", ErrRequest)
	} else if !ok && localpart == "" {
		return fmt.Errorf("%w: account does not yet exist (specify a localpart)", ErrRequest)
	} else if accountName == "" {
		return fmt.Errorf("%w: account name is empty", ErrRequest)
	} else if !ok {
		nc.Accounts[accountName] = MakeAccountConfig(smtp.NewAddress(localpart, domain))
	} else if accountName != mox.Conf.Static.Postmaster.Account {
		nacc := nc.Accounts[accountName]
		nd := map[string]config.Destination{}
		for k, v := range nacc.Destinations {
			nd[k] = v
		}
		pmaddr := smtp.NewAddress("postmaster", domain)
		nd[pmaddr.String()] = config.Destination{}
		nacc.Destinations = nd
		nc.Accounts[accountName] = nacc
	}
	nc.Domains[domain.Name()] = confDomain

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("domain added", slog.Any("domain", domain))
	cleanupFiles = nil // All good, don't cleanup.
	return nil
}
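
// Example (hypothetical; the domain, account and localpart are assumptions):
// adding a new domain together with a new account that receives its first
// address at that domain.
//
//	domain, err := dns.ParseDomain("example.com")
//	if err != nil {
//		return err
//	}
//	if err := admin.DomainAdd(ctx, domain, "myaccount", smtp.Localpart("me")); err != nil {
//		return err
//	}
//	// To add the domain to an already existing account, leave the localpart empty:
//	//	admin.DomainAdd(ctx, domain, "myaccount", "")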

// DomainRemove removes domain from the config, rewriting domains.conf.
//
// No accounts are removed, also not when they still reference this domain.
func DomainRemove(ctx context.Context, domain dns.Domain) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing domain", rerr, slog.Any("domain", domain))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	domConf, ok := c.Domains[domain.Name()]
	if !ok {
		return fmt.Errorf("%w: domain does not exist", ErrRequest)
	}

	// Check that the domain isn't referenced in a TLS public key.
	tlspubkeys, err := store.TLSPublicKeyList(ctx, "")
	if err != nil {
		return fmt.Errorf("%w: listing tls public keys: %s", ErrRequest, err)
	}
	atdom := "@" + domain.Name()
	for _, tpk := range tlspubkeys {
		if strings.HasSuffix(tpk.LoginAddress, atdom) {
			return fmt.Errorf("%w: domain is still referenced in tls public key by login address %q of account %q, change or remove it first", ErrRequest, tpk.LoginAddress, tpk.Account)
		}
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Domains = map[string]config.Domain{}
	s := domain.Name()
	for name, d := range c.Domains {
		if name != s {
			nc.Domains[name] = d
		}
	}

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	// Move away any DKIM private keys to a subdirectory "old". But only if
	// they are not in use by other domains.
	usedKeyPaths := gatherUsedKeysPaths(nc)
	moveAwayKeys(log, domConf.DKIM.Selectors, usedKeyPaths)

	log.Info("domain removed", slog.Any("domain", domain))
	return nil
}

// gatherUsedKeysPaths returns the cleaned DKIM private key file paths still
// referenced by any domain in nc.
func gatherUsedKeysPaths(nc config.Dynamic) map[string]bool {
	usedKeyPaths := map[string]bool{}
	for _, dc := range nc.Domains {
		for _, sel := range dc.DKIM.Selectors {
			usedKeyPaths[filepath.Clean(sel.PrivateKeyFile)] = true
		}
	}
	return usedKeyPaths
}

// moveAwayKeys moves the private key files of sels to an "old" subdirectory,
// unless a key file is still in use according to usedKeyPaths.
func moveAwayKeys(log mlog.Log, sels map[string]config.Selector, usedKeyPaths map[string]bool) {
	for _, sel := range sels {
		if sel.PrivateKeyFile == "" || usedKeyPaths[filepath.Clean(sel.PrivateKeyFile)] {
			continue
		}
		src := mox.ConfigDirPath(sel.PrivateKeyFile)
		dst := mox.ConfigDirPath(filepath.Join(filepath.Dir(sel.PrivateKeyFile), "old", filepath.Base(sel.PrivateKeyFile)))
		_, err := os.Stat(dst)
		if err == nil {
			err = fmt.Errorf("destination already exists")
		} else if os.IsNotExist(err) {
			os.MkdirAll(filepath.Dir(dst), 0770)
			err = os.Rename(src, dst)
		}
		if err != nil {
			log.Errorx("renaming dkim private key file for removed domain", err, slog.String("src", src), slog.String("dst", dst))
		}
	}
}
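
// Example (illustrative; the file name follows the keyPath pattern used above):
// for a selector whose PrivateKeyFile is
// "dkim/2025a._domainkey.example.com.20250101T000000.ed25519.privatekey.pkcs8.pem"
// and that is no longer referenced by any domain, moveAwayKeys attempts:
//
//	src := mox.ConfigDirPath("dkim/2025a._domainkey.example.com.20250101T000000.ed25519.privatekey.pkcs8.pem")
//	dst := mox.ConfigDirPath("dkim/old/2025a._domainkey.example.com.20250101T000000.ed25519.privatekey.pkcs8.pem")
//	// os.Rename(src, dst), but only if dst does not already exist.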

// DomainSave calls xmodify with a shallow copy of the domain config. xmodify
// can modify the config, but must clone all referencing data it changes.
// xmodify may employ panic-based error handling. After xmodify returns, the
// modified config is verified, saved and takes effect.
func DomainSave(ctx context.Context, domainName string, xmodify func(config *config.Domain) error) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("saving domain config", rerr)
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	nc := mox.Conf.Dynamic            // Shallow copy.
	dom, ok := nc.Domains[domainName] // dom is a shallow copy.
	if !ok {
		return fmt.Errorf("%w: domain not present", ErrRequest)
	}

	if err := xmodify(&dom); err != nil {
		return err
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc.Domains = map[string]config.Domain{}
	for name, d := range mox.Conf.Dynamic.Domains {
		nc.Domains[name] = d
	}
	nc.Domains[domainName] = dom

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	log.Info("domain saved")
	return nil
}
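
// Example (hypothetical; the domain and field values are assumptions): changing
// a scalar setting of a domain. Maps and slices in the passed-in shallow copy
// must be cloned before being modified.
//
//	err := admin.DomainSave(ctx, "example.com", func(d *config.Domain) error {
//		// Scalar fields can be assigned directly; d is a copy of the struct.
//		d.LocalpartCatchallSeparator = "-"
//		return nil
//	})
//	if err != nil {
//		return err
//	}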

// ConfigSave calls xmodify with a shallow copy of the dynamic config. xmodify
// can modify the config, but must clone all referencing data it changes.
// xmodify may employ panic-based error handling. After xmodify returns, the
// modified config is verified, saved and takes effect.
func ConfigSave(ctx context.Context, xmodify func(config *config.Dynamic)) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("saving config", rerr)
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	nc := mox.Conf.Dynamic // Shallow copy.
	xmodify(&nc)

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("config saved")
	return nil
}
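
// Example (hypothetical; the domain name is an assumption): modifying a domain
// through ConfigSave. The Domains map is cloned first, so the currently loaded
// config is not touched if writing the new config fails.
//
//	err := admin.ConfigSave(ctx, func(c *config.Dynamic) {
//		nd := map[string]config.Domain{}
//		for name, d := range c.Domains {
//			nd[name] = d
//		}
//		d := nd["example.com"]
//		d.LocalpartCatchallSeparator = "-"
//		nd["example.com"] = d
//		c.Domains = nd
//	})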

// AccountAdd adds an account and an initial address and reloads the configuration.
//
// The new account does not have a password, so cannot yet log in. Email can be
// delivered.
//
// Catchall addresses are not supported for AccountAdd. Add separately with AddressAdd.
func AccountAdd(ctx context.Context, account, address string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding account", rerr, slog.String("account", account), slog.String("address", address))
		}
	}()

	addr, err := smtp.ParseAddress(address)
	if err != nil {
		return fmt.Errorf("%w: parsing email address: %v", ErrRequest, err)
	}

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	if _, ok := c.Accounts[account]; ok {
		return fmt.Errorf("%w: account already present", ErrRequest)
	}

	if err := checkAddressAvailable(addr); err != nil {
		return fmt.Errorf("%w: address not available: %v", ErrRequest, err)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		nc.Accounts[name] = a
	}
	nc.Accounts[account] = MakeAccountConfig(addr)

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("account added", slog.String("account", account), slog.Any("address", addr))
	return nil
}
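
// Example (hypothetical; account name and address are assumptions): creating an
// account with its first address. The domain of the address must already be
// configured.
//
//	if err := admin.AccountAdd(ctx, "myaccount", "me@example.com"); err != nil {
//		return err
//	}
//	// The account has no password yet; set one separately before it can log in.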

// AccountRemove removes an account and reloads the configuration.
func AccountRemove(ctx context.Context, account string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing account", rerr, slog.String("account", account))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	if _, ok := c.Accounts[account]; !ok {
		return fmt.Errorf("%w: account does not exist", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		if name != account {
			nc.Accounts[name] = a
		}
	}

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	odir := filepath.Join(mox.DataDirPath("accounts"), account)
	tmpdir := filepath.Join(mox.DataDirPath("tmp"), "oldaccount-"+account)
	if err := os.Rename(odir, tmpdir); err != nil {
		log.Errorx("moving old account data directory out of the way", err, slog.String("account", account))
		return fmt.Errorf("account removed, but account data directory %q could not be moved out of the way: %v", odir, err)
	}
	if err := os.RemoveAll(tmpdir); err != nil {
		log.Errorx("removing old account data directory", err, slog.String("account", account))
		return fmt.Errorf("account removed, its data directory moved to %q, but removing it failed: %v", tmpdir, err)
	}

	if err := store.TLSPublicKeyRemoveForAccount(context.Background(), account); err != nil {
		log.Errorx("removing tls public keys for removed account", err)
		return fmt.Errorf("account removed, but removing tls public keys failed: %v", err)
	}

	log.Info("account removed", slog.String("account", account))
	return nil
}

// checkAddressAvailable checks that the address after canonicalization is not
// already configured, and that its localpart does not contain the catchall
// localpart separator.
//
// Must be called with config lock held.
func checkAddressAvailable(addr smtp.Address) error {
	dc, ok := mox.Conf.Dynamic.Domains[addr.Domain.Name()]
	if !ok {
		return fmt.Errorf("domain does not exist")
	}
	lp := mox.CanonicalLocalpart(addr.Localpart, dc)
	if _, ok := mox.Conf.AccountDestinationsLocked[smtp.NewAddress(lp, addr.Domain).String()]; ok {
		return fmt.Errorf("canonicalized address %s already configured", smtp.NewAddress(lp, addr.Domain))
	} else if dc.LocalpartCatchallSeparator != "" && strings.Contains(string(addr.Localpart), dc.LocalpartCatchallSeparator) {
		return fmt.Errorf("localpart cannot include domain catchall separator %s", dc.LocalpartCatchallSeparator)
	} else if _, ok := dc.Aliases[lp.String()]; ok {
		return fmt.Errorf("address in use as alias")
	}
	return nil
}
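
// exampleCheckAddressAvailable is an illustrative sketch (not called anywhere): it
// shows how canonicalization relates to the checks above, assuming example.org is
// configured with LocalpartCatchallSeparator "+". The domain config dc and the
// addresses are hypothetical.
func exampleCheckAddressAvailable(dc config.Domain) error {
	addr, err := smtp.ParseAddress("info+news@example.org")
	if err != nil {
		return err
	}
	// The canonical form drops the separator and what follows, giving
	// "info@example.org".
	lp := mox.CanonicalLocalpart(addr.Localpart, dc)
	_ = smtp.NewAddress(lp, addr.Domain)
	// With the config lock held, checkAddressAvailable would reject this address:
	// its localpart contains the catchall separator, and its canonical form may
	// already be configured.
	return checkAddressAvailable(addr)
}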

// AddressAdd adds an email address to an account and reloads the configuration. If
// address starts with an @ it is treated as a catchall address for the domain.
func AddressAdd(ctx context.Context, address, account string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("adding address", rerr, slog.String("address", address), slog.String("account", account))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	a, ok := c.Accounts[account]
	if !ok {
		return fmt.Errorf("%w: account does not exist", ErrRequest)
	}

	var destAddr string
	if strings.HasPrefix(address, "@") {
		d, err := dns.ParseDomain(address[1:])
		if err != nil {
			return fmt.Errorf("%w: parsing domain: %v", ErrRequest, err)
		}
		dname := d.Name()
		destAddr = "@" + dname
		if _, ok := mox.Conf.Dynamic.Domains[dname]; !ok {
			return fmt.Errorf("%w: domain does not exist", ErrRequest)
		} else if _, ok := mox.Conf.AccountDestinationsLocked[destAddr]; ok {
			return fmt.Errorf("%w: catchall address already configured for domain", ErrRequest)
		}
	} else {
		addr, err := smtp.ParseAddress(address)
		if err != nil {
			return fmt.Errorf("%w: parsing email address: %v", ErrRequest, err)
		}
		if err := checkAddressAvailable(addr); err != nil {
			return fmt.Errorf("%w: address not available: %v", ErrRequest, err)
		}
		destAddr = addr.String()
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		nc.Accounts[name] = a
	}
	nd := map[string]config.Destination{}
	for name, d := range a.Destinations {
		nd[name] = d
	}
	nd[destAddr] = config.Destination{}
	a.Destinations = nd
	nc.Accounts[account] = a

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}

	log.Info("address added", slog.String("address", address), slog.String("account", account))
	return nil
}
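
// exampleAddressAdd is an illustrative sketch (not called anywhere): it adds a
// regular address and a catchall address to an account. The account name "info"
// and the domain example.org are hypothetical.
func exampleAddressAdd(ctx context.Context) error {
	// Deliver mail for info@example.org to the account "info".
	if err := AddressAdd(ctx, "info@example.org", "info"); err != nil {
		return err
	}
	// The "@example.org" form configures a catchall: all otherwise unmatched
	// addresses at example.org are delivered to the account "info".
	return AddressAdd(ctx, "@example.org", "info")
}
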
// AddressRemove removes an email address and reloads the configuration.
// Address can be a catchall address for the domain of the form "@<domain>".
//
// If the address is member of an alias, remove it from the alias, unless it
// is the last member.
func AddressRemove(ctx context.Context, address string) (rerr error) {
	log := pkglog.WithContext(ctx)
	defer func() {
		if rerr != nil {
			log.Errorx("removing address", rerr, slog.String("address", address))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	ad, ok := mox.Conf.AccountDestinationsLocked[address]
	if !ok {
		return fmt.Errorf("%w: address does not exist", ErrRequest)
	}

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	a, ok := mox.Conf.Dynamic.Accounts[ad.Account]
	if !ok {
		return fmt.Errorf("internal error: cannot find account")
	}
	na := a
	na.Destinations = map[string]config.Destination{}
	var dropped bool
	for destAddr, d := range a.Destinations {
		if destAddr != address {
			na.Destinations[destAddr] = d
		} else {
			dropped = true
		}
	}
	if !dropped {
		return fmt.Errorf("%w: address not removed, likely a postmaster/reporting address", ErrRequest)
	}

	// Also remove matching address from FromIDLoginAddresses, composing a new slice.
	// Refuse if address is referenced in a TLS public key.
	var dom dns.Domain
	var pa smtp.Address // For non-catchall addresses (most).
	var err error
	if strings.HasPrefix(address, "@") {
		dom, err = dns.ParseDomain(address[1:])
		if err != nil {
			return fmt.Errorf("%w: parsing domain for catchall address: %v", ErrRequest, err)
		}
	} else {
		pa, err = smtp.ParseAddress(address)
		if err != nil {
			return fmt.Errorf("%w: parsing address: %v", ErrRequest, err)
		}
		dom = pa.Domain
	}
	dc, ok := mox.Conf.Dynamic.Domains[dom.Name()]
	if !ok {
		return fmt.Errorf("%w: unknown domain in address %q", ErrRequest, address)
	}

	var fromIDLoginAddresses []string
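	// For example (hypothetical, assuming example.org uses catchall separator "+"):
	// with FromIDLoginAddresses ["info+from@example.org", "sales@other.example"],
	// removing "info@example.org" keeps only the other.example entry, and removing
	// the catchall "@example.org" drops every example.org entry.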
	for i, fa := range a.ParsedFromIDLoginAddresses {
		if fa.Domain != dom {
			// Keep for different domain.
			fromIDLoginAddresses = append(fromIDLoginAddresses, a.FromIDLoginAddresses[i])
			continue
		}
		if strings.HasPrefix(address, "@") {
			continue
		}
		flp := mox.CanonicalLocalpart(fa.Localpart, dc)
		alp := mox.CanonicalLocalpart(pa.Localpart, dc)
		if alp != flp {
			// Keep for different localpart.
			fromIDLoginAddresses = append(fromIDLoginAddresses, a.FromIDLoginAddresses[i])
		}
	}
	na.FromIDLoginAddresses = fromIDLoginAddresses

	// Refuse if there is still a TLS public key that references this address.
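	// For example (hypothetical, assuming example.org uses catchall separator "+"):
	// a key registered with login address "info+phone@example.org" canonicalizes to
	// "info@example.org", so that address cannot be removed until the key is deleted.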
	tlspubkeys, err := store.TLSPublicKeyList(ctx, ad.Account)
	if err != nil {
		return fmt.Errorf("%w: listing tls public keys for account: %v", ErrRequest, err)
	}
	for _, tpk := range tlspubkeys {
		a, err := smtp.ParseAddress(tpk.LoginAddress)
		if err != nil {
			return fmt.Errorf("%w: parsing address from tls public key: %v", ErrRequest, err)
		}
		lp := mox.CanonicalLocalpart(a.Localpart, dc)
		ca := smtp.NewAddress(lp, a.Domain)
		if xad, ok := mox.Conf.AccountDestinationsLocked[ca.String()]; ok && xad.Localpart == ad.Localpart {
			return fmt.Errorf("%w: tls public key %q references this address as login address %q, remove the tls public key before removing the address", ErrRequest, tpk.Fingerprint, tpk.LoginAddress)
		}
	}

	// And remove as member from aliases configured in domains.
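	// For example (hypothetical): if "info@example.org" is the subscription address
	// of alias "announce@example.org", it is dropped from that alias's member list
	// here; if it is the last member, the removal fails and the alias has to be
	// extended or removed first.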
	domains := maps.Clone(mox.Conf.Dynamic.Domains)
	for _, aa := range na.Aliases {
		if aa.SubscriptionAddress != address {
			continue
		}
		aliasAddr := fmt.Sprintf("%s@%s", aa.Alias.LocalpartStr, aa.Alias.Domain.Name())
		dom, ok := mox.Conf.Dynamic.Domains[aa.Alias.Domain.Name()]
		if !ok {
			return fmt.Errorf("cannot find domain for alias %s", aliasAddr)
		}
		a, ok := dom.Aliases[aa.Alias.LocalpartStr]
		if !ok {
			return fmt.Errorf("cannot find alias %s", aliasAddr)
		}
		a.Addresses = slices.Clone(a.Addresses)
		a.Addresses = slices.DeleteFunc(a.Addresses, func(v string) bool { return v == address })
		if len(a.Addresses) == 0 {
			return fmt.Errorf("address is last member of alias %s, add new members or remove alias first", aliasAddr)
		}
		a.ParsedAddresses = nil // Filled when parsing config.
		dom.Aliases = maps.Clone(dom.Aliases)
		dom.Aliases[aa.Alias.LocalpartStr] = a
		domains[aa.Alias.Domain.Name()] = dom
	}
	na.Aliases = nil // Filled when parsing config.

	nc := mox.Conf.Dynamic
	nc.Accounts = map[string]config.Account{}
	for name, a := range mox.Conf.Dynamic.Accounts {
		nc.Accounts[name] = a
	}
	nc.Accounts[ad.Account] = na
	nc.Domains = domains

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("address removed", slog.String("address", address), slog.String("account", ad.Account))
	return nil
}
// AliasAdd adds an alias to the domain of the address. The localpart of addr is
// the alias name.
func AliasAdd(ctx context.Context, addr smtp.Address, alias config.Alias) error {
	return DomainSave(ctx, addr.Domain.Name(), func(d *config.Domain) error {
		if _, ok := d.Aliases[addr.Localpart.String()]; ok {
			return fmt.Errorf("%w: alias already present", ErrRequest)
		}
		if d.Aliases == nil {
			d.Aliases = map[string]config.Alias{}
		}
		d.Aliases = maps.Clone(d.Aliases)
		d.Aliases[addr.Localpart.String()] = alias
		return nil
	})
}

// AliasUpdate updates the settings of an existing alias. Only the PostPublic,
// ListMembers and AllowMsgFrom fields are changed; the member addresses are kept.
func AliasUpdate(ctx context.Context, addr smtp.Address, alias config.Alias) error {
	return DomainSave(ctx, addr.Domain.Name(), func(d *config.Domain) error {
		a, ok := d.Aliases[addr.Localpart.String()]
		if !ok {
			return fmt.Errorf("%w: alias does not exist", ErrRequest)
		}
		a.PostPublic = alias.PostPublic
		a.ListMembers = alias.ListMembers
		a.AllowMsgFrom = alias.AllowMsgFrom
		d.Aliases = maps.Clone(d.Aliases)
		d.Aliases[addr.Localpart.String()] = a
		return nil
	})
}

// AliasRemove removes an alias from the domain configuration.
func AliasRemove(ctx context.Context, addr smtp.Address) error {
	return DomainSave(ctx, addr.Domain.Name(), func(d *config.Domain) error {
		_, ok := d.Aliases[addr.Localpart.String()]
		if !ok {
			return fmt.Errorf("%w: alias does not exist", ErrRequest)
		}
		d.Aliases = maps.Clone(d.Aliases)
		delete(d.Aliases, addr.Localpart.String())
		return nil
	})
}

// AliasAddressesAdd adds member addresses to an existing alias.
func AliasAddressesAdd(ctx context.Context, addr smtp.Address, addresses []string) error {
	if len(addresses) == 0 {
		return fmt.Errorf("%w: at least one address required", ErrRequest)
	}
	return DomainSave(ctx, addr.Domain.Name(), func(d *config.Domain) error {
		alias, ok := d.Aliases[addr.Localpart.String()]
		if !ok {
			return fmt.Errorf("%w: no such alias", ErrRequest)
		}
		alias.Addresses = append(slices.Clone(alias.Addresses), addresses...)
		alias.ParsedAddresses = nil
		d.Aliases = maps.Clone(d.Aliases)
		d.Aliases[addr.Localpart.String()] = alias
		return nil
	})
}

// AliasAddressesRemove removes member addresses from an existing alias. All given
// addresses must currently be members.
func AliasAddressesRemove(ctx context.Context, addr smtp.Address, addresses []string) error {
	if len(addresses) == 0 {
		return fmt.Errorf("%w: need at least one address", ErrRequest)
	}
	return DomainSave(ctx, addr.Domain.Name(), func(d *config.Domain) error {
		alias, ok := d.Aliases[addr.Localpart.String()]
		if !ok {
			return fmt.Errorf("%w: no such alias", ErrRequest)
		}
		// Remove each requested address from the alias, dropping each match from
		// "addresses" as we go so we can detect addresses that were not members.
		alias.Addresses = slices.DeleteFunc(slices.Clone(alias.Addresses), func(addr string) bool {
			n := len(addresses)
			addresses = slices.DeleteFunc(addresses, func(a string) bool { return a == addr })
			return n > len(addresses)
		})
		if len(addresses) > 0 {
			return fmt.Errorf("%w: address not found: %s", ErrRequest, strings.Join(addresses, ", "))
		}
		alias.ParsedAddresses = nil
		d.Aliases = maps.Clone(d.Aliases)
		d.Aliases[addr.Localpart.String()] = alias
		return nil
	})
}
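
// A hypothetical usage sketch of the alias helpers above (the addresses and
// settings are made-up example values; error handling shortened):
func exampleAliasSetup(ctx context.Context) error {
	addr, err := smtp.ParseAddress("support@example.org")
	if err != nil {
		return err
	}
	alias := config.Alias{PostPublic: false, ListMembers: true, AllowMsgFrom: false}
	if err := AliasAdd(ctx, addr, alias); err != nil {
		return err
	}
	return AliasAddressesAdd(ctx, addr, []string{"alice@example.org", "bob@example.org"})
}
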
// AccountSave updates the configuration of an account. Function xmodify is called
// with a shallow copy of the current configuration of the account. It must not
// change referencing fields (e.g. an existing slice/map/pointer): they may still be
// in use, and the change may have to be rolled back. Referencing values must be
// copied and replaced by xmodify. The function may raise a panic for error handling.
func AccountSave(ctx context.Context, account string, xmodify func(acc *config.Account)) (rerr error) {
	log := pkglog.WithContext(ctx)

	defer func() {
		if rerr != nil {
			log.Errorx("saving account fields", rerr, slog.String("account", account))
		}
	}()

	defer mox.Conf.DynamicLockUnlock()()

	c := mox.Conf.Dynamic
	acc, ok := c.Accounts[account]
	if !ok {
		return fmt.Errorf("%w: account not present", ErrRequest)
	}
	xmodify(&acc)

	// Compose new config without modifying existing data structures. If we fail, we
	// leave no trace.
	nc := c
	nc.Accounts = map[string]config.Account{}
	for name, a := range c.Accounts {
		nc.Accounts[name] = a
	}
	nc.Accounts[account] = acc

	if err := mox.WriteDynamicLocked(ctx, log, nc); err != nil {
		return fmt.Errorf("writing domains.conf: %w", err)
	}
	log.Info("account fields saved", slog.String("account", account))
	return nil
}
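
// A hedged usage sketch of AccountSave: the xmodify callback clones any map it
// changes before replacing it, per the doc comment above. The Destinations field
// and the config.Destination zero value used here are assumptions for illustration,
// as is the address being added.
func exampleAccountSave(ctx context.Context) error {
	return AccountSave(ctx, "user1", func(acc *config.Account) {
		// Clone the map instead of writing into the shared one.
		nd := maps.Clone(acc.Destinations)
		if nd == nil {
			nd = map[string]config.Destination{}
		}
		nd["user1+lists@example.org"] = config.Destination{}
		acc.Destinations = nd
	})
}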