so mail user agents will show DSNs threaded/grouped with the original message.
we store the MessageID in the message queue, so we have the value within reach
when we need it.
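for illustration (message-ids made up), the headers on the generated DSN then
include:

Message-ID: <dsn-1@mail.example.org>
References: <original-2@mail.example.org>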
i saw a references header in a DSN from gmail on a test account. makes sense to me.
previously, when broadcasting a change, we did a non-blocking send of the
changes on a channel. if the send could not proceed (because no receive was
blocked on the channel), we held on to the changes until the potential receiver
explicitly requested them. however, the imap idle handler never explicitly
requested the changes, it just did a receive on the changes channel. since no
send was pending on that channel, the receive blocked. only when another event
came in were both the pending and the new changes sent.
we now use a channel only for signaling that there are pending changes. the
channel is buffered, so when broadcasting we can just set the signal with a
non-blocking send and continue with the next listener. the receiver will get
the buffered signal and can then fetch the changes directly, protected by a
lock.
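the pattern, as a small standalone go sketch (the type and names are
illustrative, not the actual mox code):

package main

import "sync"

// listener sketches the signaling pattern described above; changes are
// simplified to strings.
type listener struct {
	sync.Mutex
	changes []string      // pending changes, protected by the embedded mutex
	signal  chan struct{} // buffered with capacity 1
}

// broadcast records changes and sets the signal, never blocking.
func (l *listener) broadcast(changes ...string) {
	l.Lock()
	l.changes = append(l.changes, changes...)
	l.Unlock()
	select {
	case l.signal <- struct{}{}:
	default: // signal already set; receiver will pick up all pending changes
	}
}

// await blocks until the signal is set, then takes all pending changes.
func (l *listener) await() []string {
	<-l.signal
	l.Lock()
	defer l.Unlock()
	changes := l.changes
	l.changes = nil
	return changes
}

func main() {
	l := &listener{signal: make(chan struct{}, 1)}
	l.broadcast("1 exists")
	l.broadcast("2 exists") // never blocks, even with no receiver around
	_ = l.await()           // returns both changes at once
}

the capacity-1 buffer is what lets broadcast continue without a waiting
receiver, and multiple broadcasts collapse into a single wakeup.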
found when looking at a missing/delayed new message notification in thunderbird
when two messages arrive immediately after each other. this doesn't fix that
problem though: it seems thunderbird just ignores imap untagged "exists"
messages (indicating a new message arrived) during the "uid fetch" command that
it issued after notifications from an "idle" command.
and add a bit more logging for unexpected failures when closing files.
and make tests pass with a TMPDIR on a different filesystem than the testdata directory.
the import was still processed, but the SSE connection to fetch progress did
not work since adding the loggingWriter.
found while working on other functionality that uses SSE.
the trailing slash is commonly forgotten. in the default setup, for the admin
endpoint, this makes you end up at the account endpoint, which won't accept
your admin credentials. with this change, users won't get confused by that
anymore.
for issue #43
the mailbox select/examine responses now return all flags used in a mailbox in
the FLAGS response. and indicate in the PERMANENTFLAGS response that clients
can set new keywords. we store these values on the new Message.Keywords field.
system/well-known flags are still in Message.Flags, so we recognize those and
handle them separately.
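as an illustration (not the exact server output), selecting a mailbox where a
message carries the $label1 keyword could return:

* FLAGS (\Answered \Flagged \Deleted \Seen \Draft $label1)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft $label1 \*)] allowed

the \* in the PERMANENTFLAGS list is what tells clients they may store new
keywords.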
the imap store command handles the new flags, as do the append and search
commands.
we store keywords in a mailbox when a message in that mailbox gets the keyword.
we don't automatically remove the keywords from a mailbox. there is currently
no way at all to remove a keyword from a mailbox.
the import commands now handle non-system/well-known keywords too, when
importing from mbox/maildir.
jmap requires keyword support, so best to get it out of the way now.
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain checks that any SOCKS IPs are
present in the SPF record. SPF for smtp/submission (ranges? includes?) and any
DKIM requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
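a rough sketch of how this fits together (field names are from memory and
approximate, the config documentation is authoritative): a transport defined
in mox.conf, referenced by name from a route in domains.conf on an account or
domain:

Transports:
	smarthost:
		Submission:
			Host: smtp.example.com
			Auth:
				Username: user@example.com
				Password: secret

Routes:
	-
		ToDomain:
			- example.org
		MinimumAttempts: 4
		Transport: smarthost

the route would send mail for example.org through the "smarthost" transport,
only from a later delivery attempt onwards.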
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
so external tools (like fail2ban) can monitor the logs and block IPs of bots.
for issue #30 by inigoserna, though i'm not sure i interpreted the suggestion correctly.
if we recognize that a request for a WebForward is trying to turn the
connection into a websocket, we forward it to the backend and check if the
backend understands the websocket request. if so, we pass back the upgrade
response and get out of the way, copying bytes between the two. we do log the
total amount of bytes read from the client and written to the client. if the
backend doesn't respond with a websocket response, or an invalid one, we
respond with a regular non-websocket response. and we log details about the
failed connection, which should help with debugging and any bug reports.
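in outline, the recognition of the upgrade request and the byte copying look
roughly like this (a simplified sketch, not the actual mox code):

// Package wsproxy sketches the websocket pass-through described above.
package wsproxy

import (
	"io"
	"net"
	"net/http"
	"strings"
)

// isWebsocketUpgrade reports whether a request asks to turn the connection
// into a websocket (the rfc 6455 handshake headers).
func isWebsocketUpgrade(r *http.Request) bool {
	return strings.EqualFold(r.Header.Get("Upgrade"), "websocket") &&
		strings.Contains(strings.ToLower(r.Header.Get("Connection")), "upgrade")
}

// copyBidir copies bytes between client and backend in both directions after
// a successful upgrade, returning the totals for logging. a real
// implementation would also set deadlines and log errors.
func copyBidir(client, backend net.Conn) (fromClient, toClient int64) {
	done := make(chan struct{})
	go func() {
		fromClient, _ = io.Copy(backend, client)
		backend.Close() // unblock the copy in the other direction
		close(done)
	}()
	toClient, _ = io.Copy(client, backend)
	client.Close()
	<-done
	return
}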
we don't try to parse the websocket framing, that's between the client and the
backend. we could try to parse it, in part to protect the backend from bad
frames, but it would be a lot of work and could be brittle in the face of
extensions.
this doesn't yet handle websocket connections when an http proxy is configured.
we'll implement it when someone needs it. we do recognize it and fail the
connection.
for issue #25
by specifying a "destination" in an account that is just "@" followed by the
domain, e.g. "@example.org". messages are only delivered to the catchall
address when no regular destination matches (taking the per-domain
catchall-separator and case-sensitivity into account).
for issue #18
by default at most 1000 messages per day, and to at most 200 first-time
receivers.
i don't think a person would reach those limits. a compromised account abused
by spammers could easily reach them. this prevents further damage.
the error message you will get is quite clear, pointing to the configuration
parameter that should be changed.
at the end of the quickstart. also hint at it during startup, when printing the
listener. and mention it in the FAQ.
another recent commit made the admin and account http path configurable, and
that expanded the config docs with a mention of the default path.
based on feedback from stroyselmash in issue #20, thanks!
so you can use the host (domain) name of the mail server for serving other
resources too. the default is still that the account pages are served on /,
and so take all incoming requests before giving webhandlers a chance.
mox localserve now serves the account pages on /account/
we already do acme tls-alpn-01 validation, and still require it (we could relax
this at some point). http-01 is easy to add.
the bug was that the list of acme managers and hosts to refresh was overwritten
by another listener. the listeners are a map, and we range over it, so the
order in which we handle them is random. if the public listener was handled
first, and an internal listener later, the list was reset again.
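the shape of the bug, as an illustrative sketch (type and field names are made
up):

package main

import "fmt"

type listener struct {
	acme  bool     // whether this listener uses acme for tls certificates
	hosts []string // hostnames to request certificates for
}

func main() {
	listeners := map[string]listener{
		"public":   {acme: true, hosts: []string{"mail.example.org"}},
		"internal": {acme: false},
	}

	hosts := map[string]struct{}{}
	for _, l := range listeners { // map iteration order is random
		// the bug was equivalent to resetting the accumulated set at the
		// start of each iteration, wiping entries from listeners handled
		// earlier; the fix is to only ever add.
		if !l.acme {
			continue
		}
		for _, h := range l.hosts {
			hosts[h] = struct{}{}
		}
	}
	fmt.Println(hosts) // map[mail.example.org:{}]
}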
current behaviour isn't intuitive. it's not great to have to attempt parsing
the strings as both localpart and email address. so we deprecate the
localpart-only behaviour. when we load the config file, and it has
localpart-only Destinations keys, we'll change them to full addresses in
memory. when an admin causes a write of domains.conf, it'll automatically be
fixed. we log an error with a deprecation notice for each localpart-only
Destinations key.
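for example (a sketch, the exact value syntax may differ):

# deprecated localpart-only key:
Destinations:
	mjl: nil

# what it is rewritten to:
Destinations:
	mjl@example.org: nil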
sometime in the future, we can remove the old localpart-only destination
support. will be in the release notes then.
also start keeping track of update notes that need to make it in the release
notes of the next release.
for issue #18
the idea is to make it clear from the logging if non-ascii characters are used.
this is implemented by making mlog recognize if a field value that will be
logged has a LogString method. if so, that value is logged. dns.Domain,
smtp.Address, smtp.Localpart, smtp.Path now have a LogString method.
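the mechanism boils down to an interface check, roughly like this simplified
sketch (the localpart type and its escaping are illustrative only):

package main

import "fmt"

// logStringer is checked for on field values; anything implementing it is
// logged via LogString.
type logStringer interface {
	LogString() string
}

// logValue prefers LogString when a value implements it, otherwise falls back
// to default formatting.
func logValue(v any) string {
	if ls, ok := v.(logStringer); ok {
		return ls.LogString()
	}
	return fmt.Sprintf("%v", v)
}

// localpart stands in for types like smtp.Localpart; the real LogString
// methods do proper escaping of non-ascii.
type localpart string

func (lp localpart) LogString() string {
	return fmt.Sprintf("%q", string(lp))
}

func main() {
	fmt.Println(logValue(localpart("rené"))) // quoted by LogString
	fmt.Println(logValue(42))                // default formatting
}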
some explicit calls to String have been replaced with LogString, and some %q
formatting has been replaced with %s, because the escaped localpart already
has double quotes, and doubled double quotes aren't easy to read.
you can already get most http to https redirects through DontRedirectPlainHTTP
in WebHandler, but that needs handlers for all paths.
now you can just set up a redirect for a domain and all its paths to baseurl
https://domain (leaving the other webredirect fields empty). when a request
comes in over plain http, the redirect to https is done. that next request
also evaluates the same redirect rule, but it does not cause a match because
it would redirect to the same scheme, host and path. so the next webhandlers
get a chance to serve.
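a sketch of such a redirect handler (field names approximate):

WebHandlers:
	-
		Domain: example.org
		PathRegexp: ^/
		WebRedirect:
			BaseURL: https://example.org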
also clarify in the webhandlers docs that the account & admin built-in handlers
run first.
related to issue #16
- make it easier to run with an existing webserver. the quickstart now has a
new option for that: it generates a different mox.conf and further
instructions, such as configuring the tls keys/certs and reverse proxy urls.
and it includes changes to make autoconfig work in that case too.
- when starting up, request a tls cert for the hostname and for the autoconfig
endpoint. the first will be requested soon anyway, and the autoconfig cert is
needed early so the first autoconfig request doesn't time out (without a
helpful message to the user by at least thunderbird). and don't request the
certificates before the servers are online: until now, the root process was
requesting the certs before the child process was serving on the tls port.
- add examples of configs generated by the quickstart.
- enable debug logging in config from quickstart, to give user more info.
for issue #5
- make builtin http handlers serve on specific domains, such as for mta-sts, so
e.g. /.well-known/mta-sts.txt isn't served on all domains.
- add logging of a few more fields in access logging.
- small tweaks/bug fixes in webserver request handling.
- add config option for redirecting entire domains to another (common enough).
- split httpserver metric into two: one for duration until writing header (i.e.
performance of server), another for duration until full response is sent to
client (i.e. performance as perceived by users).
- add admin ui, a new page for managing the configs. after making changes
and hitting "save", the changes take effect immediately. the page itself
doesn't look very well-designed (many input fields, makes it look messy). i
have an idea to improve it (explained in admin.html as todo) by making the
layout look just like the config file. not urgent though.
i've already changed my websites/webapps over.
the idea of adding a webserver is to take away a (the) reason for folks to want
to complicate their mox setup by running another webserver on the same machine.
i think the current webserver implementation can already serve most common use
cases. with a few more tweaks (feedback needed!) we should be able to get to 95%
of the use cases. the reverse proxy can take care of the remaining 5%.
nevertheless, a next step is still to change the quickstart to make it easier
for folks to run with an existing webserver, with existing tls certs/keys.
that's how this relates to issue #5.
- serve static files, serving index.html or optionally listings for directories
- redirects
- reverse-proxy, forwarding requests to a backend
these are configurable through the config file. a domain and path regexp have to
be configured. path prefixes can be stripped. configured domains are added to
the autotls allowlist, so acme automatically fetches certificates for them.
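roughly what two such handlers look like (a sketch; field names are
approximate, the example config files mentioned below are authoritative):

WebHandlers:
	-
		Domain: www.example.org
		PathRegexp: ^/downloads/
		WebStatic:
			StripPrefix: /downloads
			Root: /home/mox/downloads
			ListFiles: true
	-
		Domain: www.example.org
		PathRegexp: ^/app/
		WebForward:
			StripPath: true
			URL: http://127.0.0.1:8080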
all webserver requests now have (access) logging, metrics, rate limiting.
on http errors, the error message prints an encrypted cid for relating with log files.
this also adds a new mechanism for example config files.
makes it easier to run on BSDs, where you cannot (easily?) let non-root users
bind to ports <1024. starting as root also paves the way for future improvements
with privilege separation.
unfortunately, this requires changes to how you start mox. though mox will help
by automatically fixing up dir/file permissions/ownership.
if you start mox from the systemd unit file, you should update it so it starts
as root and adds a few additional capabilities:
# first update the mox binary, then, as root:
./mox config printservice >mox.service
systemctl daemon-reload
systemctl restart mox
journalctl -f -u mox &
# you should see mox start up, with messages about fixing permissions on dirs/files.
if you used the recommended config/ and data/ directory, in a directory just for
mox, and with the mox user called "mox", this should be enough.
if you don't want mox to modify dir/file permissions, set "NoFixPermissions:
true" in mox.conf.
if you named the mox user something other than mox, e.g. "_mox", add "User: _mox"
to mox.conf.
if you created a shared service user as originally suggested, you may want to
get rid of that as it is no longer useful and may get in the way. e.g. if you
had /home/service/mox with a "service" user, that service user can no longer
access any files: only mox and root can.
this also adds scripts for building mox docker images for alpine-supported
platforms.
the "restart" subcommand has been removed. it wasn't all that useful and got in
the way.
and another change: when adding a domain while mtasts isn't enabled, don't add
the per-domain mtasts config, as it would cause failure to add the domain.
based on report from setting up mox on openbsd from mteege.
and based on issue #3. thanks for the feedback!
so you can run mox on openbsd with port redirects in pf.conf.
in the future, starting as root, binding the sockets, and passing the bound
sockets to a new unprivileged process should be implemented, but this should
get openbsd users going.
from discussion with mteege
so users can easily take their email out of somewhere else, and import it into mox.
this goes a little way to give feedback as the import progresses: upload
progress is shown (surprisingly, browsers aren't doing this...), imported
mailboxes/messages are counted (batched) and import issues/warnings are
displayed, all sent over an SSE connection. an import token is stored in
sessionstorage. if you reload the page (e.g. after a connection error), the
browser will reconnect to the running import and show its progress again. and
you can just abort the import before it is finished and committed, and nothing
will have changed.
this also imports flags/keywords from mbox files.