the field is optional. if absent, the default behaviour is currently to disable
session tickets. users can set the option if they want to test whether delivery
from microsoft is working again. in a future version, we can switch the default to
enabling session tickets.
the previous fix was to disable session tickets for all tls connections,
including https. that was a bit much.
for issue #237
instead of failing the connection because no certificates are available.
this may improve interoperability. perhaps the remote smtp client that's doing
the delivery will decide they do like the tls cert for our (mx) hostname after
all.
this only applies to incoming smtp deliveries. for other tls connections
(https, imaps/submissions and imap/submission with starttls) we still cause
connections for unknown sni hostnames to fail. in case no sni was present, we
were already falling back to a cert for the (listener/mx) hostname, that
behaviour hasn't changed.
for issue #206 by RobSlgm
per listener, you could enable the admin/account/webmail/webapi handlers. but
that would serve those services on their configured paths (/admin/, /,
/webmail/, /webapi/) on all domains mox would be webserving, including any
non-mail domains. so your www.example.org/admin/ would be serving the admin web
interface, with no way to disable that.
with this change, the admin interface is only served on requests to (based on
Host header):
- ip addresses
- the listener host name (explicitly configured in the listener, with fallback
to global hostname)
- "localhost" (for ssh tunnel/forwarding scenario's)
the account/webmail/webapi interfaces are served on the same domains as the
admin interface, and additionally:
- the client settings domains, as optionally configured in each Domain in
domains.conf. typically "mail.<yourdomain>".
this means the internal services are no longer served on other domains
configured in the webserver, e.g. www.example.org/admin/ will not be handled
specially.
the order of evaluation of routes/services is also changed:
before this change, the internal handlers would always be evaluated first.
with this change, only the system handlers for
MTA-STS/autoconfig/ACME-validation will be evaluated first. then the webserver
handlers. and finally the internal services (admin/account/webmail/webapi).
this allows an admin to configure overrides for some of the domains (per
hostname-matching rules explained above) that would normally serve these
services.
webserver handlers can now be configured that pass the request to an internal
service: in addition to the existing static/redirect/forward config options,
there is now an "internal" config option, naming the service
(admin/account/webmail/webapi) for handling the request. this allows enabling
the internal services on custom domains.
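for example, a webserver handler in domains.conf could look roughly like this
to serve the webmail on a custom domain (a sketch; the field names, in
particular WebInternal with BasePath/Service, are how i remember the new
option, so verify against the config reference for your version):
```
WebHandlers:
    -
        LogName: webmail
        Domain: webmail.example.com
        PathRegexp: ^/webmail/
        WebInternal:
            BasePath: /webmail/
            Service: Webmail
```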
for issue #160 by TragicLifeHu, thanks for reporting!
it's the responsibility of the sender to use unique fromids.
we do check if that's the case, and return an error if not.
also make it more clear that "unique smtp mail from addresses" map to the
"FromIDLoginAddresses" account config field.
based on feedback from cuu508 for #31, thanks!
the members must currently all be addresses of local accounts.
a message sent to an alias is accepted if at least one of the members accepts
it. if no member accepts it (e.g. due to bad reputation of the sender), the
message is rejected.
if a message is submitted to both an alias address and to recipients that are
members of the alias in an smtp transaction, the message will be delivered to
such members only once. the same applies if the address in the message
from-header is the address of a member: that member won't receive the message
(they sent it). this prevents duplicate messages.
aliases have three configuration options:
- PostPublic: whether anyone can send through the alias, or only members.
members-only lists can be useful inside organizations for internal
communication. public lists can be useful for support addresses.
- ListMembers: whether members can see the addresses of other members. this can
be seen in the account web interface. in the future, we could export this in
other ways, so clients can expand the list.
- AllowMsgFrom: whether messages can be sent through the alias with the alias
address used in the message from-header. the webmail knows it can use that
address, and will use it as from-address when replying to a message sent to
that address.
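a sketch of what an alias could look like in domains.conf, with the member
addresses pointing at local accounts (field names from memory, so treat them
as an assumption):
```
Domains:
    example.org:
        Aliases:
            support:
                Addresses:
                    - alice@example.org
                    - bob@example.org
                PostPublic: true
                ListMembers: false
                AllowMsgFrom: false
```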
ideas for the future:
- allow external addresses as members. still with some restrictions, such as
requiring a valid dkim-signature so delivery has a chance to succeed. will
also need configuration of an admin that can receive any bounces.
- allow specifying specific members who can send through the list (instead of
all members).
for github issue #57 by hmfaysal.
also relevant for #99 by naturalethic.
thanks to damir & marin from sartura for discussing requirements/features.
if the message has a list-id header, we assume this is a (mailing) list
message, and we require a dkim/spf-verified domain (we prefer the shortest that
is a suffix of the list-id value). the rule we would add will mark such
messages as from a mailing list, changing filtering rules on incoming messages
(not enforcing dmarc policies). messages will be matched on list-id header and
will only match if they have the same dkim/spf-verified domain.
if the message doesn't have a list-id header, we'll ask to match based on
"message from" address.
we don't ask the user in several cases:
- if the destination/source mailbox is a special-use mailbox (e.g.
trash, archive, sent, junk; the inbox isn't included)
- if the rule already exists (no point in adding it again).
- if the user said "no, not for this list-id/from-address" in the past.
- if the user said "no, not for messages moved to this mailbox" in the past.
we'll add the rule if the message was moved out of the inbox.
if the message was moved to the inbox, we check if there is a matching rule
that we can remove.
we now remember the "no" answers (for list-id, msg-from-addr and mailbox) in
the account database.
to implement the msgfrom rules, this adds support to rulesets for matching on
message "from" address. before, we could match on smtp from address (and other
fields). rulesets now also have a field for comments. webmail adds a note that
it created the rule, with the date.
manual editing of the rulesets is still in the webaccount page. this webmail
functionality is just a convenient way to add/remove common rules.
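for illustration, the rulesets the webmail adds could end up in domains.conf
looking roughly like this (a sketch; field names and the exact comment text
are my assumptions):
```
Destinations:
    user@example.org:
        Rulesets:
            -
                # mailing list rule: match on the list-id header, only for the
                # dkim/spf-verified domain.
                HeadersRegexp:
                    list-id: <mylist\.lists\.example\.com>
                ListAllowDomain: lists.example.com
                Mailbox: Lists/mylist
                Comment: created by webmail on 2024-05-01
            -
                # non-list rule: match on the message from-header address.
                MsgFromRegexp: @newsletter\.example\.com$
                Mailbox: Newsletters
                Comment: created by webmail on 2024-05-01
```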
for dmarc reporting address, tls reporting address, mtasts policy, dkim keys/selectors.
should make it easier for webadmin-using admins to discover these settings.
the webadmin interface is now on par with functionality you would set through
the configuration file, let's keep it that way.
this simplifies some of the code that makes modifications to the config file. a
few protected functions can make changes to the dynamic config, which webadmin
can use, instead of having separate functions in mox-/admin.go for each type of
change.
this also exports the parsed full dynamic config to webadmin, so we need fewer
functions for specific config fields too.
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need an http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.
a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.
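the per-account queue/webhook settings in domains.conf could look roughly like
this (a sketch; the field names are how i remember them from this release,
verify against the config reference):
```
Accounts:
    user0:
        # ...other account fields...
        # keep history of delivered/failed messages and webhook calls for 3 days.
        KeepRetiredMessagePeriod: 72h
        KeepRetiredWebhookPeriod: 72h
        # webhook calls for events about outgoing deliveries.
        OutgoingWebhook:
            URL: https://app.example.com/mox/hooks/outgoing
            Authorization: Basic bW94OnNlY3JldA==
        # webhook calls for incoming deliveries.
        IncomingWebhook:
            URL: https://app.example.com/mox/hooks/incoming
            Authorization: Basic bW94OnNlY3JldA==
```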
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account, unless the recipient
address has a special form, in which case a failure to deliver is simulated.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages, pausing
the entire queue. also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
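enabling it on an existing install would look roughly like this in the
listener config in mox.conf, analogous to AccountHTTP(S)/WebmailHTTP(S) (a
sketch; verify the exact field names against the config reference):
```
Listeners:
    public:
        # ...existing listener config...
        WebAPIHTTP:
            Enabled: true
        WebAPIHTTPS:
            Enabled: true
```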
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.
for issue #31 by cuu508
typescript now knows the full types, not just "any" for account config.
inline structs previously in config.Account are given their own type definition
so sherpa can generate types.
also update to latest sherpa lib that knows about time.Duration, to be used soon.
The `TransportDirect` transport allows tweaking outgoing SMTP
connections to remote servers. Currently, it only allows selecting the
network IP family (ipv4, ipv6 or both).
For example, to disable ipv6 for all outgoing SMTP connections:
- add these lines in mox.conf to create a new transport named
"disableipv6":
```
Transports:
disableipv6:
Direct:
DisableIpv6: true
```
- then add these lines in domains.conf to use this transport:
```
Routes:
-
Transport: disableipv6
```
fix #149
preventing writing out a domains.conf that is invalid and can't be parsed
again. this happened when the last address was removed from an account, just a
click in the admin web interface.
accounts without an email address cannot log in.
for issue #133 by ally9335
by explaining (in the titles/hovers) what the concepts and requirements are, by
using selects/dropdowns or datalist suggestions where we have a known list, by
automatically suggesting a good account name, and putting the input fields in a
more sensible order.
based on issue #132 by ally9335
so you can still know when someone has put you on their blocklist (which may
affect delivery), without using them.
also query dnsbls for our ips more often when we do more outgoing connections
for delivery: once every 100 messages, but at least 5 mins and at most 3 hours
since the previous check.
you can configure a domain only to accept dmarc/tls reports. those domains
won't have addresses for that domain configured (the reporting destination
address is for another domain). we already handled such domains specially in a
few places. but we were considering ourselves authoritative for such domains if
an smtp client would send a message to the domain during submit. and we would
reject all recipient addresses. but we should be trying to deliver those
messages to the actual mx hosts for the domain, which we will now do.
most content is in markdown files in website/, some is taken out of the repo
README and rfc/index.txt. a Go file generates html. static files are kept in a
separate repo due to size.
the http basic auth we had was very simple to reason about, and to implement.
but it has a major downside:
there is no way to logout, browsers keep sending credentials. ideally, browsers
themselves would show a button to stop sending credentials.
a related downside: the http auth mechanism doesn't indicate for which server
paths the credentials are.
another downside: the original password is sent to the server with each
request. though sending original passwords to web servers seems to be
considered normal.
our new approach uses session cookies, along with csrf values when we can. the
sessions are server-side managed, automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (than with
long- vs short-term sessions and refreshing). the cookies are httponly,
samesite=strict, scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random, and associated with the session cookie.
the csrf must be sent as header for api calls, or as parameter for direct form
posts (where we cannot set a custom header). rest-like calls made directly by
the browser, e.g. for images, don't have a csrf protection. the csrf value is
returned by the Login api call and stored in localstorage.
api calls without credentials return code "user:noAuth", and with bad
credentials return "user:badAuth". the api client recognizes this and triggers
a login. after a login, all auth-failed api calls are automatically retried.
only for "user:badAuth" is an error message displayed in the login form (e.g.
session expired).
in an ideal world, browsers would take care of most session management. a
server would indicate authentication is needed (like http basic auth), and the
browser uses trusted ui to request credentials for the server & path. the
browser could use a safer mechanism than sending original passwords to the
server, such as scram, along with a standard way to create sessions. for now,
web developers have to do authentication themselves: from showing the login
prompt to ensuring the right session/csrf cookies/localstorage/headers/etc are
sent with each request.
webauthn is a newer way to do authentication, perhaps we'll implement it in the
future. though hardware tokens aren't an attractive option for many users, and
it may be overkill as long as we still do old-fashioned authentication in smtp
& imap where passwords can be sent to the server.
for issue #58
all ui frontend code is now in typescript. we no longer need jshint, and we
build the frontend code during "make build".
this also changes the tlsrpt types for a Report to not encode field names with
dashes, keeping them valid identifiers in javascript. this makes them more
convenient to work with in the frontend, and works around a sherpats
limitation.
the autoconfig/autodiscover endpoints, and the printed client settings (in
quickstart, in the admin interface) now all point to the cname record (called
"client settings domain"). it is configurable per domain, and set to
"mail.<domain>" by default. for existing mox installs, the domain can be added
by editing the config file.
this makes it easier for a domain to migrate to another server in the future.
client settings don't have to be updated, the cname can just be changed.
before, the hostname of the mail server was configured in email clients.
migrating away would require changing settings in all clients.
if a client settings domain is configured, a TLS certificate for the name will
be requested through ACME, or must be configured manually.
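for example, in domains.conf (a sketch; i believe the field is called
ClientSettingsDomain, but verify against the config reference):
```
Domains:
    example.org:
        ClientSettingsDomain: mail.example.org
```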
to get the security benefits (detecting mitm attempts), explicitly configure
clients to use a scram plus variant, e.g. scram-sha-256-plus. unfortunately,
not many clients support it yet.
imapserver scram plus support seems to work with the latest imtest (imap test
client) from cyrus-sasl. no success yet with mutt (with gsasl) though.
should prevent potential mitm attacks. especially when done close to the
machine itself (where an http/tls challenge is intercepted to get a valid
certificate), as seen on the internet last month.
so a single user cannot fill up the disk.
by default, there is (still) no limit. a default can be set in the config file
for all accounts, and a per-account max size can be set that would override any
global setting.
this does not take into account disk usage of the index database, nor any file
system overhead.
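roughly, the settings could look like this (a sketch; i believe the field is
QuotaMessageSize in both mox.conf and per account in domains.conf, but treat
that as an assumption):
```
# mox.conf: default maximum total message size per account, in bytes (0 means no limit).
QuotaMessageSize: 1000000000

# domains.conf: per-account override of the global default.
Accounts:
    user0:
        QuotaMessageSize: 5000000000
```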
based on discussion on the uta mailing list. it seems the intention of tlsrpt
is to only send reports to recipient domains. but i was able to interpret the
tlsrpt rfc as sending reports to mx hosts too ("policy domain", and because it
makes sense given how DANE works per MX host, not recipient domain). this
change makes the behaviour of outgoing reports to recipient domains work more
in line with expectations most folks may have about tls reporting (i.e. also
include per-mx host tlsa policies in the report). this also keeps reports to mx
hosts working, and makes them more useful by including the recipient domains of
affected deliveries.
we were already accepting, processing and displaying incoming tls reports. now
we start tracking TLS connection and security-policy-related errors for
outgoing message deliveries as well. we send reports once a day, to the
reporting addresses specified in TLSRPT records (rua) of a policy domain. these
reports are about MTA-STS policies and/or DANE policies, and about
STARTTLS-related failures.
sending reports is enabled by default, but can be disabled through setting
NoOutgoingTLSReports in mox.conf.
only at the end of the implementation process came the realization that the
TLSRPT policy domain for DANE (MX) hosts is separate from the TLSRPT policy
for the recipient domain, and that MTA-STS and DANE TLS/policy results are
typically delivered in separate reports. so MX hosts need their own TLSRPT
policies.
config for the per-host TLSRPT policy should be added to mox.conf for existing
installs, in field HostTLSRPT. it is automatically configured by quickstart for
new installs. with a HostTLSRPT config, the "dns records" and "dns check" admin
pages now suggest the per-host TLSRPT record. by creating that record, you're
requesting TLS reports about your MX host.
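in mox.conf that could look roughly like this (a sketch; the subfield names are
my recollection, check the config reference):
```
# mox.conf: address to use in the per-host TLSRPT record, and where to deliver
# incoming reports about our MX host.
HostTLSRPT:
    Account: postmaster
    Localpart: tls-reports
    Mailbox: TLSRPT

# to stop sending outgoing tls reports altogether:
#NoOutgoingTLSReports: true
```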
gathering all the TLS/policy results is somewhat tricky. the tentacles go
throughout the code. the positive result is that the TLS/policy-related code
had to be cleaned up a bit. for example, the smtpclient TLS modes now reflect
reality better, with independent settings about whether PKIX and/or DANE
verification has to be done, and/or whether verification errors have to be
ignored (e.g. for tls-required: no header). also, cached mtasts policies of
mode "none" are now cleaned up once the MTA-STS DNS record goes away.
in smtpserver, we store dmarc evaluations (under the right conditions).
in dmarcdb, we periodically (hourly) send dmarc reports if there are
evaluations. for failed deliveries, we deliver the dsn quietly to a submailbox
of the postmaster mailbox.
this is on by default, but can be disabled in mox.conf.
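if i remember correctly, the mox.conf field to turn this off looks like the
following (an assumption by analogy with NoOutgoingTLSReports, so verify
against the config reference):
```
# mox.conf: don't send aggregate dmarc reports for incoming messages.
NoOutgoingDMARCReports: true
```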
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:
1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so entire transport is verified-tls-protected).
2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).
we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.
for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select "fallback to insecure" to add the
"tls-required: no" header.
new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.
the admin can change the requiretls status for messages in the queue. so with
default delivery attempts, when verified tls is required but failing, an admin
could potentially change the field to "tls-required: no"-behaviour.
messages received (over smtp) with the requiretls option, get a comment added
to their Received header line, just before "id", after "with".
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec secure) added, as well as support
for enhanced dns errors, and looking up tlsa records (for dane). ideally it
would be upstreamed, but the chances seem slim.
dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.
but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in the tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.
mox now applies dane to delivering messages over smtp. mox already implemented
mta-sts for webpki/pkix-verification of certificates against the (large) pool
of CA's, and still enforces those policies when present. but it now also checks
for dane records, and will verify those if present. if dane and mta-sts are
both absent, the regular opportunistic tls with starttls is still done. and the
fallback to plaintext is also still done.
mox also makes it easy to setup dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup, because their public keys must be published in tlsa records in dns.
autocert would generate private keys on its own, so had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change and we can drop the fork.
with this change, using the quickstart to setup a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
to accept reports for another domain, first add that domain to the config,
leaving all options empty except DMARC/TLSRPT in which you configure a Domain.
the suggested DNS DMARC/TLSRPT records will show the email address with
configured domain. for DMARC, the dnscheck functionality will verify that the
destination domain has opted in to receiving reports.
there is a new command-line subcommand "mox dmarc checkreportaddrs" that
verifies if dmarc reporting destination addresses have opted in to receiving
reports.
this also changes the suggested dns records (in quickstart, and through admin
pages and cli subcommand) to take into account whether DMARC and TLSRPT are
configured, and with which localpart/domain (previously it always printed
records as if reporting was enabled for the domain). and when generating the
suggested DNS records, the dmarc.Record and tlsrpt.Record code is used, with
proper uri-escaping.
we only compress if applicable (the content-type indicates likely compressible
data, the client supports it, and the response doesn't already have a
content-encoding).
for internal handlers, we always enable compression. for reverse proxied and
static files, compression must be enabled per handler.
for internal & reverse proxy handlers, we do streaming compression at
"bestspeed" quality (probably level 1).
for static files, we have a cache based on mtime with fixed max size, where we
evict based on least recently used. we compress with the default level (more
cpu, better ratio).
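for a static-file handler, that could look roughly like this in domains.conf
(a sketch; i'm assuming the flag is a per-handler Compress field, verify
against the config reference):
```
WebHandlers:
    -
        LogName: www
        Domain: www.example.org
        PathRegexp: ^/
        # opt in to (cached) compression for this handler.
        Compress: true
        WebStatic:
            Root: web/www.example.org
```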
NATIPs lists the public IPs, so we can still do the DNS checks on them. with
IPsNATed, we disabled the checks.
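for example, in the listener in mox.conf (a sketch; NATIPs is the new field as
i remember it, verify against the config reference):
```
Listeners:
    public:
        IPs:
            - 10.0.0.10
        # the public ips of the nat, used for dns checks.
        NATIPs:
            - 198.51.100.10
```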
based on feedback by kikoreis in issue #52
this is based on @bobobo1618's PR #50. bobobo1618 had the right idea, i tried
including an "is forwarded email" configuration option but that indeed became
too tightly coupled. the "is forwarded" option is still planned, but it is
separate from the "accept rejects to mailbox" config option, because one could
still want to push back on forwarded spam messages.
we do an actual accept, delivering to a configured mailbox, instead of storing
to the rejects mailbox, from which messages can be removed automatically. one
of the goals of mox is to not pretend to accept email while actually junking it.
users can still configure delivery to a junk folder (as was already possible),
but messages aren't deleted automatically. there is still an X-Mox-Reason header in the
message, and a log line about accepting the reject, but otherwise it is
registered and treated as an (smtp) accept.
the ruleset mailbox is still required to keep that explicit. users can specify
Inbox again.
hope this is good enough for PR #50, otherwise we'll change it.
a few places still looked at the name "Sent". but since we have special-use
flags, we should always look at those. this also changes the config so admins
can specify different names for the special-use mailboxes to create for new
accounts, e.g. in a different language. the old config option is still
understood, just deprecated.
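the new-style config in mox.conf could look roughly like this, e.g. for dutch
mailbox names (a sketch; i'm assuming the option is called InitialMailboxes
with a SpecialUse subsection, verify against the config reference):
```
# mox.conf: special-use mailboxes to create for new accounts.
InitialMailboxes:
    SpecialUse:
        Archive: Archief
        Draft: Concepten
        Junk: Ongewenst
        Sent: Verzonden
        Trash: Prullenbak
```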
it was far down on the roadmap, but implemented earlier, because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change. jmap has lots of other requirements, so it's already
a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and backend are written together, making their interface/api much
smaller and simpler than jmap.
one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes. keeping this
data consistent after any change to the stored messages (through the code base)
is tricky, so mox now has a consistency check that verifies the counts are
correct, which runs only during tests, each time an internal account reference
is closed. we have a few more internal "changes" that are propagated for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.
the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view, default is automatic based on
screen size. the panes can be resized by the user. buttons for actions are just
text, not icons. clicking a button briefly shows the shortcut for the action in
the bottom right, helping with learning to operate quickly. any text that is
underdotted has a title attribute that causes more information to be displayed,
e.g. what a button does or a field is about. to highlight potential phishing
attempts, any text (anywhere in the webclient) that switches unicode "blocks"
(a rough approximation to (language) scripts) within a word is underlined
orange. multiple messages can be selected with familiar ui interaction:
clicking while holding control and/or shift keys. keyboard navigation works
with arrows/page up/down and home/end keys, and also with a few basic vi-like
keys for list/message navigation. we prefer showing the text version of a
message instead of the html version (with inlined images only). html messages are shown
in an iframe served from an endpoint with CSP headers to prevent dangerous
resources (scripts, external images) from being loaded. the html is also
sanitized, with javascript removed. a user can choose to load external
resources (e.g. images for tracking purposes).
the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend. since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls. the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple, it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, no additional
runtime code needed or complicated build processes used. the webmail is served
from a compressed, cacheable html file that includes the styling and the
javascript, currently just over 225kb uncompressed, under 60kb compressed (not
minified, including comments). we include the generated js files in the
repository, to keep the Go binaries easily buildable and self-contained.
authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
```
WebmailHTTP:
    Enabled: true
WebmailHTTPS:
    Enabled: true
```
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
we could make more types of delays configurable. the current approach isn't
great, as it results in a default value of "0s" in the config file, while
the actual default is 15s (which is documented just above, but still).