the trailing slash is commonly forgotten. in the default setup, for the admin
endpoint, this makes you end up at the account endpoint, which won't accept
your admin credentials. with this change, users won't get confused by that
anymore.
for issue #43
this doesn't really test the output of the ctl commands, just that they succeed
without error. better than nothing...
testing found two small bugs that are not an issue in practice:
1. we were ack'ing streamed data from the other side of the ctl connection
before having read it. when there is no buffer space on the connection (always
the case for net.Pipe) that would cause a deadlock, see the sketch below the
list. it only actually happened during the new tests.
2. the generated dkim key paths are relative to the directory of the dynamic
config file. mox looked them up relative to the directory of the _static_ config
file at startup. these directories are typically the same. users would have
noticed if they had triggered this.
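the sketch for bug 1 (illustration only, not mox code): net.Pipe is unbuffered,
so a Write only returns once the other end Reads. if the server side writes its
ack before reading the streamed data while the client is still writing that
data, both sides block:

    // sketch only; imports "io" and "net".
    func demoDeadlock() {
        client, server := net.Pipe()
        go func() {
            // wrong order: ack before reading the streamed data.
            server.Write([]byte("ok\n"))  // blocks, the client is not reading yet
            io.Copy(io.Discard, server)   // never reached
        }()
        client.Write(make([]byte, 1<<20)) // blocks, the server is not reading yet
        // both sides are now stuck; reading the data before writing the ack avoids this.
    }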
so mox doesn't have to be running when you run it.
will be useful for testing in the near future.
this also moves cpuprof and memprof cli flags to top-level flag parsing, so all
commands can use them.
we could make more types of delays configurable. the current approach isn't
great, as it results in a default value of "0s" in the config file, while the
actual default is 15s (which is documented just above, but still).
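the pattern, as a sketch (not the actual code, the function name is made up):
the zero value written to the config file stands for the documented default.

    // sketch only; import "time". the configured value is a time.Duration whose
    // zero value stands for the documented default of 15s.
    func retryDelay(configured time.Duration) time.Duration {
        if configured == 0 {
            return 15 * time.Second
        }
        return configured
    }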
renaming inbox is special. the mailbox isn't renamed, but its messages moved to
a new mailbox. we weren't updating the destination mailbox uidnext with the new
messages. the fix not only sets the uidnext correctly, but also renumbers the
uids, starting at 1.
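roughly like this (a sketch only, Message/Mailbox field names and types are
assumptions; the real change updates the database records in a transaction):

    // sketch only: give the moved messages fresh uids starting at 1, and set
    // the destination mailbox uidnext to the first unused uid.
    func renumber(msgs []Message, mb *Mailbox) {
        var uid uint32 = 1
        for i := range msgs {
            msgs[i].UID = uid
            uid++
        }
        mb.UIDNext = uid
    }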
this also adds a consistency check for message uids and mailbox uidnexts, and
for mailbox uidvalidity and account nextuidvalidity, in "mox verifydata".
this also adds command "mox fixuidmeta" (not listed) that fixes up mailbox uidnext
and account uidvalidity. and command "mox reassignuids" that will renumber the
uids for either one or all mailboxes in an account.
for a uid set, the syntax <num>:* must be interpreted as <num>:<maxuid>. a
wrong check turned the uid set into <maxuid>:<maxuid>. that check was meant for
the case where <num> is higher than <maxuid>, in which case num must be
replaced with maxuid.
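the intended interpretation, as a sketch (not the actual parser code, the
function name is made up):

    // "*" stands for maxuid; only when num is higher than maxuid is num capped.
    func uidRangeBounds(num, maxuid uint32) (first, last uint32) {
        if num > maxuid {
            num = maxuid // e.g. "600:*" with maxuid 500 becomes 500:500
        }
        return num, maxuid // e.g. "100:*" with maxuid 500 becomes 100:500
    }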
this affected "uid expunge" with a uid set, possibly causing messages marked
for deletion not to be actually removed, and this affected "search" with the
uid parameter, possibly not returning all messages that were searched for.
found while writing tests for upcoming condstore/qresync extensions.
these could cause the parser to reject correct commands.
the first bug is about the allowed chars for an "atom": we were accepting too
many. this probably isn't easily triggered in practice.
the second bug is about how numbers (digits) are parsed. when gathering digits
to parse as a number, we didn't consider only the directly upcoming digits that
make up the number, but continued looking for digits later on in the command.
then we tried to parse a string that was too long as a number, which would fail
because of the additional characters. this could have been triggered with
commands containing two numbers, e.g. "tag search or larger 123 smaller 123":
the "or" takes two search keys, each with a number here. not too common, but it
can happen.
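the fix, roughly (a sketch, not the actual parser code): gather only the
directly upcoming digits and parse just that substring.

    // sketch only; import "strconv".
    func parseNumber(s string, pos int) (uint32, int, error) {
        end := pos
        for end < len(s) && s[end] >= '0' && s[end] <= '9' {
            end++
        }
        v, err := strconv.ParseUint(s[pos:end], 10, 32)
        return uint32(v), end, err
    }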
found while writing tests for upcoming condstore/qresync implementation.
the mailbox select/examine responses now return all flags used in a mailbox in
the FLAGS response. and indicate in the PERMANENTFLAGS response that clients
can set new keywords. we store these values on the new Message.Keywords field.
system/well-known flags are still in Message.Flags, so we're recognizing those
and handling them separately.
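the split between the two fields works roughly like this (a sketch with assumed
names, not the actual code):

    // sketch only; import "strings". flags is the list from e.g. an append or
    // store command; well-known flags like $Forwarded are handled like the
    // system flags (not shown).
    func splitFlags(flags []string) (keywords []string) {
        system := map[string]bool{
            `\answered`: true, `\flagged`: true, `\deleted`: true,
            `\seen`: true, `\draft`: true,
        }
        for _, f := range flags {
            lf := strings.ToLower(f)
            if system[lf] {
                continue // set the existing boolean on Message.Flags instead (not shown)
            }
            keywords = append(keywords, lf)
        }
        return
    }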
the imap store command handles the new flags. as does the append command, and
the search command.
we store keywords in a mailbox when a message in that mailbox gets the keyword.
we don't automatically remove the keywords from a mailbox. there is currently
no way at all to remove a keyword from a mailbox.
the import commands now handle non-system/well-known keywords too, when
importing from mbox/maildir.
jmap requires keyword support, so best to get it out of the way now.
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
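the evaluation described above, as a sketch (types and field names are made up,
not the actual config structures):

    // sketch only: account routes first, then domain, then global; the first
    // matching route selects the transport, no match (or an empty transport)
    // means regular direct delivery.
    type route struct {
        SenderDomain, RecipientDomain string // empty means "any"
        MinAttempt                    int    // e.g. 4 for "from the 4th attempt onwards"
        Transport                     string
    }

    func selectTransport(account, domain, global []route, sender, rcpt string, attempt int) string {
        for _, routes := range [][]route{account, domain, global} {
            for _, r := range routes {
                if r.SenderDomain != "" && r.SenderDomain != sender {
                    continue
                }
                if r.RecipientDomain != "" && r.RecipientDomain != rcpt {
                    continue
                }
                if attempt < r.MinAttempt {
                    continue
                }
                return r.Transport
            }
        }
        return "" // direct delivery
    }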
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
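picking the mechanism, roughly (a sketch, not the actual sendmail code; the
preference order follows the list above):

    // sketch only; import "slices" (go 1.21+). prefer the most secure mechanism
    // the server announces.
    func pickMechanism(offered []string) string {
        for _, m := range []string{"SCRAM-SHA-256", "SCRAM-SHA-1", "CRAM-MD5", "PLAIN"} {
            if slices.Contains(offered, m) {
                return m
            }
        }
        return "" // nothing we support
    }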
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
it seems linux machines with systemd-resolved don't always set up
/etc/resolv.conf correctly. there may be no "nameserver" entry, causing Go's
net resolver to fall back to 127.0.0.1 and ::1. systemd-resolved is listening
on 127.0.0.53, so users will likely get a "connection refused". so point users
to the systemd-resolved manual page.
for issue #38 by ArnoSen
with tls with acme (with pebble, a small acme server for testing), and with
pregenerated keys/certs.
the two mox instances are each configured with their own domain. we launch a
separate test container that connects to the first and submits a message for
delivery to the second. with an imap connection and the idle command, we check
that the message is delivered.
we were adding the missing date and/or message-id header, but didn't sign it.
and the default dkim signing config is to (over)sign those headers. so that was
causing errors with bad signatures.
found while setting up automated tests for quickstart, while sending a very
basic message between fresh installs.
i wondered why self-signed mtasts certs didn't result in delivery failure. it's
because it was a first-time request of the mtasts policy (clean test
container), and in that case mtasts should be ignored.
so external tools (like fail2ban) can monitor the logs and block ips of bots.
for issue #30 by inigoserna, though i'm not sure i interpreted the suggestion correctly.
makes it easier to use tls keys/certs managed by other tools, with or without
acme. the root process has access to open such files. the child process reads
the key from the file descriptor, then closes the file.
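the child side looks roughly like this (a sketch, not the actual code; the
function name is made up):

    // sketch only; imports "io" and "os". fd is the descriptor inherited from
    // the root process, which opened the key/cert file on our behalf.
    func readKeyCert(fd uintptr) ([]byte, error) {
        f := os.NewFile(fd, "tls-keycert")
        defer f.Close()
        return io.ReadAll(f)
    }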
for issue #30 by inigoserna, thanks!
someone asked at the recent golang rotterdam meetup if this would be added.
i looked into it, and it requires implementing an imap extension
XAPPLEPUSHSERVICE (not documented, but apple published modified dovecot
software for macos server that implemented it). to send push notifications to
the ios mail app, you need an APNS certificate. the tutorials online explain you
have to purchase macos server (a deprecated product) and extract the APNS
certificate. the certificate is valid for one year. i'm not sure it still
works, and it feels like it could stop working at any moment. but implementing
it seems doable.
if we recognize that a request for a WebForward is trying to turn the
connection into a websocket, we forward it to the backend and check if the
backend understands the websocket request. if so, we pass back the upgrade
response and get out of the way, copying bytes between the two. we do log the
total number of bytes read from the client and written to the client. if the
backend doesn't respond with a websocket response, or responds with an invalid
one, we respond with a regular non-websocket response. and we log details about
the failed connection, which should help with debugging and any bug reports.
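the flow, as a sketch (not the actual mox code): detect the upgrade request,
and once the backend has answered with a valid upgrade response, hijack the
client connection and copy bytes in both directions while counting them.

    // sketch only; imports "io", "net", "net/http", "strings".
    func isWebsocketUpgrade(r *http.Request) bool {
        return strings.EqualFold(r.Header.Get("Upgrade"), "websocket") &&
            strings.Contains(strings.ToLower(r.Header.Get("Connection")), "upgrade")
    }

    // called after the backend's 101 response has been written to the hijacked
    // client connection; returns the byte counts for logging.
    func copyWebsocket(client, backend net.Conn) (fromClient, toClient int64) {
        done := make(chan int64, 1)
        go func() {
            n, _ := io.Copy(backend, client) // bytes read from the client
            backend.Close()
            done <- n
        }()
        toClient, _ = io.Copy(client, backend) // bytes written to the client
        client.Close()
        fromClient = <-done
        return
    }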
we don't try to parse the websocket framing, that's between the client and the
backend. we could try to parse it, in part to protect the backend from bad
frames, but it would be a lot of work and could be brittle in the face of
extensions.
this doesn't yet handle websocket connections when an http proxy is configured.
we'll implement it when someone needs it. we do recognize it and fail the
connection.
for issue #25
the backup command will make consistent snapshots of all the database files. i
had been copying the db files before, and that usually works. but if a file is
modified during the backup, the copy is inconsistent and is likely to generate
errors when reading (possibly at any moment in the future, when some db page is
read).
"mox backup" opens the database file and writes out a copy in a transaction.
it also duplicates the message files.
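the database library is built on bbolt, and a consistent copy within a read
transaction looks roughly like this (a sketch only, paths and the function name
are examples):

    // sketch only; import bolt "go.etcd.io/bbolt". the copy happens inside a
    // read transaction, so it is consistent even if the file is modified
    // concurrently.
    func backupDB(srcPath, dstPath string) error {
        db, err := bolt.Open(srcPath, 0600, nil)
        if err != nil {
            return err
        }
        defer db.Close()
        return db.View(func(tx *bolt.Tx) error {
            return tx.CopyFile(dstPath, 0600)
        })
    }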
before doing a restore, you could run "mox verifydata" on the to-be-restored
"data" directory. it check the database files, and compares the message files
with the database.
the new "gentestdata" subcommand generates a basic "data" directory, with a
queue and a few accounts. we will use it in the future along with "verifydata"
to test upgrades from older versions to the latest version, both when going to
the next version and when skipping several versions. the script test-upgrades.sh
executes these tests and doesn't do anything at the moment, because no releases
have this subcommand yet.
inspired by a failed upgrade attempt of a pre-release version.