before this change, when archiving, we would move all messages from the thread
that are in the same mailbox as the message being responded to, to the archive
mailbox. so if the message being responded to was already in the archive
mailbox, we would try to move it from the archive mailbox to the archive
mailbox, resulting in an error.
with this change, when archiving, we move the thread messages that are in the
mailbox that is currently open (independent of the mailbox the message being
responded to lives in, which commonly differs in the threading view). if there
is no open mailbox (e.g. search results), we still use the mailbox of the
message being responded to as reference.
with this new approach, we won't get errors moving a message to the archive
mailbox when it's already there. well, you can still get that error, but then
you've got the archive mailbox open, or you're in a search result and
responding to an archived message. in those cases the error should at least
help the user understand why nothing is happening.
we only move the messages from one active/reference mailbox because we don't
want to move messages from the thread that are in the Sent mailbox, and we
also don't want to move duplicate messages (cross-posts to mailing lists) that
are in other mailboxes. moving only the messages from the current active
mailbox seems safe, and should do what users expect most of the time.
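roughly, the selection works like this (a minimal sketch with made-up type and
function names, not the actual webmail code):

// sketch of the new behaviour; names are made up.
type Message struct {
	ID        int64
	MailboxID int64
}

// threadMessagesToArchive picks the thread messages to move to the Archive
// mailbox: those in the currently open mailbox, or, if no mailbox is open
// (e.g. search results), those in the mailbox of the message being responded to.
func threadMessagesToArchive(thread []Message, openMailboxID, respondMailboxID int64) []Message {
	ref := openMailboxID
	if ref == 0 {
		ref = respondMailboxID
	}
	var l []Message
	for _, m := range thread {
		if m.MailboxID == ref {
			l = append(l, m)
		}
	}
	return l
}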
for issue #233 by mattfbacon, thanks for reporting!
these singleusetokens can be redeemed only once. so by the time you see one in
the logs, it can no longer be used. they are short-lived anyway.
this change should help prevent me periodically investigating token handling...
if the message has a list-id header, we assume this is a (mailing) list
message, and we require a dkim/spf-verified domain (we prefer the shortest that
is a suffix of the list-id value). the rule we would add marks such messages
as coming from a mailing list, which changes the filtering of incoming messages
(e.g. not enforcing dmarc policies). messages are matched on the list-id header
and only match if they have the same dkim/spf-verified domain.
if the message doesn't have a list-id header, we'll ask to match based on the
"message from" address.
we don't ask the user in several cases:
- if the destination/source mailbox is a special-use mailbox (e.g. trash,
archive, sent, junk; the inbox isn't included).
- if the rule already exists (no point in adding it again).
- if the user said "no, not for this list-id/from-address" in the past.
- if the user said "no, not for messages moved to this mailbox" in the past.
we'll add the rule if the message was moved out of the inbox.
if the message was moved to the inbox, we check if there is a matching rule
that we can remove.
we now remember the "no" answers (for list-id, msg-from-addr and mailbox) in
the account database.
to implement the msgfrom rules, this adds support to rulesets for matching on
the message "from" address. before, we could only match on the smtp mail from
address (and other fields). rulesets now also have a field for comments. the
webmail adds a comment noting that it created the rule, with the date.
manual editing of the rulesets is still possible on the webaccount page. this
webmail functionality is just a convenient way to add/remove common rules.
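for illustration, a rule like the webmail would add for a list message could
look roughly like this under a destination in the account config (field names
here are indicative, check the config reference for the exact spelling):

Rulesets:
	-
		HeadersRegexp:
			^list-id$: <mylist\.lists\.example\.org>
		ListAllowDomain: lists.example.org
		Mailbox: mylist
		Comment: by webmail on <date>

a rule for a message without a list-id header would instead match on the
message "from" address (the new msgfrom matching).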
the regular send shortcut is control+Enter; adding shift enables "archive
thread". there is no configuration option, you'll always get the button, but
only for reply/forward, not for a new compose.
we may add "send and move thread to trash", but let's wait until people want it.
for github issue #135 by mattfbacon
when sending a message with bcc's, we now prepend the bcc header to the copy
we store in the Sent folder, but still not to the message we send to the
recipients.
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but for
that you need to know about all these standards/protocols and find libraries.
by using the webapi & webhooks, you just need an http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
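to give an idea of the shape, a sketch of submitting a message over http with
json (the endpoint path and field names here are made up, and authentication
is omitted; see the webapi documentation for the real interface):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// made-up request body; the real webapi methods and fields differ.
	req := map[string]any{
		"From":    "you@example.org",
		"To":      []string{"someone@example.com"},
		"Subject": "hello",
		"Text":    "hi!",
	}
	var body bytes.Buffer
	if err := json.NewEncoder(&body).Encode(req); err != nil {
		panic(err)
	}
	// made-up endpoint; real deployments also need credentials.
	resp, err := http.Post("https://mail.example.org/webapi/v0/Send", "application/json", &body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}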
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to handle the
potentially large history.
a queue of webhook calls is now managed too. failures are retried similarly to
message deliveries. webhooks can also be saved to the retired list after
completing. this is also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
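for example (the fromid here is made up): a message submitted as
user@example.org could go out with

MAIL FROM:<user+3skkqa7t@example.org>

and a DSN addressed to that unique address can be matched back to the original
queued/retired message.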
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and through
webhooks for outgoing deliveries: through the webapi as a json object, and
through smtp submission as message headers of the form
"x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account, unless the recipient
address has a special form that simulates a failure to deliver.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages,
pausing the entire queue. this is also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
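on existing installs that means adding something like the following to a
listener in mox.conf (analogous to the webmail settings shown further down;
check the config reference for the exact key names):

WebAPIHTTP:
	Enabled: true
WebAPIHTTPS:
	Enabled: true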
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
both when parsing our configs, and for incoming addresses on smtp or in
messages. so we properly compare things like é and e+combining-accent as
equal, and accept the different encodings of that same address.
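for illustration (not the exact code mox uses), unicode normalization makes
the two encodings compare equal:

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	a := "caf\u00e9@example.org"  // é as a single composed code point
	b := "cafe\u0301@example.org" // e followed by a combining acute accent
	fmt.Println(a == b)                                   // false, byte representations differ
	fmt.Println(norm.NFC.String(a) == norm.NFC.String(b)) // true after NFC normalization
}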
the smtp extension, rfc 4865.
also implemented in the webmail.
the queueing/delivery part hardly required changes: we just set the first
delivery time in the future instead of immediately.
still have to find the first client that implements it.
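for illustration, rfc 4865 adds the HOLDFOR (seconds) and HOLDUNTIL (a
timestamp) parameters to MAIL FROM, e.g.:

MAIL FROM:<sender@example.org> HOLDFOR=3600
MAIL FROM:<sender@example.org> HOLDUNTIL=2030-01-01T09:00:00Z

the first asks to hold delivery for an hour, the second until the given time.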
to start composing a message.
the help popup now has a button to register the mox webmail as handler for
"mailto:" links (typically only works over https, and not all browsers support
it).
mailto links are specified in rfc 6068. we support the to/cc/bcc/subject/body
parameters. other parameters should be treated as custom headers, but we don't
support messages with custom headers at all at the moment, so we ignore them.
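for example, a link like

mailto:user@example.org?cc=other@example.org&subject=hello&body=hi%20there

opens the compose window with those fields prefilled.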
we now also turn text of the form "mailto:user@host" into a clickable link
(probably not too common). in the future, we could recognize any "x@x.x" as an
email address and make it clickable too.
thanks to Hans-Jörg for explaining this functionality.
the http basic auth we had was very simple to reason about, and to implement.
but it has a major downside:
there is no way to logout, browsers keep sending credentials. ideally, browsers
themselves would show a button to stop sending credentials.
a related downside: the http auth mechanism doesn't indicate for which server
paths the credentials are valid.
another downside: the original password is sent to the server with each
request. though sending original passwords to web servers seems to be
considered normal.
our new approach uses session cookies, along with csrf values when we can. the
sessions are server-side managed, automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (than with
long- vs short-term sessions and refreshing). the cookies are httponly,
samesite=strict, scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random, and associated with the session cookie.
the csrf value must be sent as a header for api calls, or as a parameter for
direct form posts (where we cannot set a custom header). rest-like calls made
directly by the browser, e.g. for images, don't have csrf protection. the csrf
value is returned by the Login api call and stored in localstorage.
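an authenticated api request then looks roughly like this (the path, cookie
and header names here are placeholders, the real names are in the webmail
code):

POST /webmail/api/SomeMethod HTTP/1.1
Content-Type: application/json
Cookie: session=<opaque session token>
X-CSRF: <csrf value from Login>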
api calls without credentials return code "user:noAuth", and with bad
credentials return "user:badAuth". the api client recognizes this and triggers
a login. after a login, all auth-failed api calls are automatically retried.
only for "user:badAuth" is an error message displayed in the login form (e.g.
session expired).
in an ideal world, browsers would take care of most session management. a
server would indicate authentication is needed (like http basic auth), and the
browser would use trusted ui to request credentials for the server & path. the
browser could use a safer mechanism than sending the original password to the
server, such as scram, along with a standard way to create sessions. for now,
web developers have to do authentication themselves: from showing the login
prompt to ensuring the right session/csrf cookies/localstorage/headers/etc are
sent with each request.
webauthn is a newer way to do authentication, perhaps we'll implement it in the
future. though hardware tokens aren't an attractive option for many users, and
it may be overkill as long as we still do old-fashioned authentication in smtp
& imap where passwords can be sent to the server.
for issue #58
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:
1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so the entire transport is verified-tls-protected).
2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).
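illustrated:
1. the smtp extension (rfc 8689) adds a MAIL FROM parameter, e.g.
	MAIL FROM:<sender@example.org> REQUIRETLS
2. the message header is simply
	TLS-Required: No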
we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.
for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select "fallback to insecure" to add the
"tls-required: no" header.
new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.
the admin can change the requiretls status for messages in the queue. so when
deliveries with the default behaviour fail because verified tls is required
but failing, an admin could change the field to get "tls-required: no"
behaviour.
messages received (over smtp) with the requiretls option get a comment added
to their Received header line, after "with", just before "id".
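for example (hosts and id made up):

Received: from mail.sender.example ([192.0.2.10]) by mox.example.org
	([192.0.2.20]) with ESMTPS (requiretls) id AbCdEf123; ...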
the bar is currently showing 3 properties:
1. mta-sts enforced;
2. mx lookup returned dnssec-signed response;
3. first delivery destination host has dane records
the colors are: red for not-implemented, green for implemented, gray for error,
nothing for unknown/irrelevant.
the plan is to implement "requiretls" soon and start caching per domain
whether delivery can be done with starttls and whether the domain supports
requiretls, and to show that in two new parts of the bar.
thanks to damian poddebniak for pointing out that security indicators should
always be visible, not only for a positive or negative result. otherwise users
won't notice their absence.
we match messages to their parents based on the "references" and "in-reply-to"
headers (requiring the same base subject), and in the absence of those headers
we match by base subject alone (against messages received at most 4 weeks ago).
we store a threadid with messages. all messages in a thread have the same
threadid. messages also have "thread parent ids", which holds all ids of
parent messages up to the thread root. then there is "thread missing link",
which is set when a referenced immediate parent wasn't found (but earlier
ancestors may still be found and will be in the thread parent ids).
threads can be muted: newly delivered messages are automatically marked as
read/seen. threads can be marked as collapsed: if set, the webmail collapses
the thread to a single item in the basic threading view (default is to expand
threads). the muted and collapsed fields are copied from their parent on
message delivery.
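as a rough sketch, the per-message fields described above (the actual
store.Message field names may differ slightly):

// sketch of the threading-related fields stored per message.
type Message struct {
	ID int64

	ThreadID          int64   // same value for all messages in a thread.
	ThreadParentIDs   []int64 // ids of ancestor messages, up to the thread root.
	ThreadMissingLink bool    // set when the referenced immediate parent wasn't found.
	ThreadMuted       bool    // newly delivered messages in the thread are marked seen.
	ThreadCollapsed   bool    // webmail shows the thread as a single collapsed item.
}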
the threading is implemented in the webmail. the non-threading mode still works
as before. the new default threading mode "unread" automatically expands only
the threads with at least one unread (not seen) message. the basic threading
mode "on" expands all threads except those explicitly collapsed (as saved in
the thread collapsed field). new shortcuts for navigating/interacting with
threads have been added, e.g. go to previous/next thread root, toggle
collapse/expand of a thread (or double click), toggle mute of a thread. some
previous shortcuts have changed, see the help for details.
message threading is added with an explicit account upgrade step, automatically
started when an account is opened. the upgrade is done in the background
because for large mailboxes it would take too long to block account operations.
the upgrade takes two steps: 1. updating all message records in the database to
add a normalized message-id and thread base subject (with "re:", "fwd:" and
several other schemes stripped). 2. going through all messages in the database
again, reading the "references" and "in-reply-to" headers from disk, and
matching them against their parents. this second step is also done at the end
of each import of mbox/maildir mailboxes. new deliveries are matched
immediately against existing messages; currently no attempt is made to rematch
previously delivered messages (which could be useful for related messages
being delivered out of order).
the threading is not yet exposed over imap.
soon, we can have multiple rejects mailboxes. and checking against the
configured rejects mailbox name wasn't foolproof to begin with, because it may
have changed between delivery to the rejects mailbox and the message being
moved.
after upgrading, messages currently in rejects mailboxes don't have IsReject
set, so they don't get the special rejects treatment when being moved. they are
removed from the rejects mailbox after some time though, and newly added
rejects will be treated correctly. so some existing messages that were wrongly
delivered to the rejects mailbox and then moved out aren't used (as a positive
signal) for future deliveries. this saves a bit of complexity in the
implementation; i think the tradeoff is worth it.
related to discussion in issue #50
it was far down on the roadmap, but implemented earlier, because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change. jmap has lots of other requirements, so it's already
a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and its backend api are written together, making their interface/api
much smaller and simpler than jmap.
one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes. keeping this
data consistent after any change to the stored messages (anywhere in the code
base) is tricky, so mox now has a consistency check that verifies the counts
are correct, which runs only during tests, each time an internal account
reference is closed. we have a few more internal "changes" that are propagated
for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.
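the consistency check is conceptually simple (a sketch, simplified to a few of
the tracked fields; the real store code differs):

package example

import "fmt"

// simplified per-mailbox counts; the real code tracks a few more fields.
type counts struct {
	Total   int64
	Unread  int64
	Deleted int64
	Size    int64
}

type msg struct {
	Seen    bool
	Deleted bool
	Size    int64
}

// checkCounts recomputes the counts from the messages in a mailbox and
// compares them with the stored counts, as done in tests when an internal
// account reference is closed.
func checkCounts(stored counts, msgs []msg) error {
	var got counts
	for _, m := range msgs {
		got.Total++
		got.Size += m.Size
		if m.Deleted {
			got.Deleted++
		}
		if !m.Seen {
			got.Unread++
		}
	}
	if got != stored {
		return fmt.Errorf("inconsistent mailbox counts: stored %+v, calculated %+v", stored, got)
	}
	return nil
}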
the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view, default is automatic based on
screen size. the panes can be resized by the user. buttons for actions are just
text, not icons. clicking a button briefly shows the shortcut for the action in
the bottom right, helping with learning to operate quickly. any text that is
underdotted has a title attribute that causes more information to be displayed,
e.g. what a button does or a field is about. to highlight potential phishing
attempts, any text (anywhere in the webclient) that switches unicode "blocks"
(a rough approximation to (language) scripts) within a word is underlined
orange. multiple messages can be selected with familiar ui interaction:
clicking while holding control and/or shift keys. keyboard navigation works
with arrows/page up/down and home/end keys, and also with a few basic vi-like
keys for list/message navigation. we prefer showing the text version of a
message over the html version (which only shows inlined images). html messages
are shown
in an iframe served from an endpoint with CSP headers to prevent dangerous
resources (scripts, external images) from being loaded. the html is also
sanitized, with javascript removed. a user can choose to load external
resources (e.g. images for tracking purposes).
the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend. since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls. the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple: it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, with no
additional runtime code needed or complicated build processes used. the
webmail is served from a compressed, cacheable html file that includes the
style and javascript, currently just over 225kb uncompressed, under 60kb
compressed (not minified, including comments). we include the generated js
files in the repository, to keep mox an easily buildable, self-contained Go
binary.
authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
WebmailHTTP:
	Enabled: true
WebmailHTTPS:
	Enabled: true
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.