despite the name, it doesn't actually check for valid email addresses:
it doesn't allow non-ascii localparts, accepts various invalid localparts, and
rejects various valid localparts. no point in using it.
- add option to put messages in the queue "on hold", preventing delivery
attempts until taken off hold again.
- add "hold rules", to automatically mark some/all submitted messages as "on
hold", e.g. from a specific account or to a specific domain.
- add operation to "fail" a message, causing a DSN to be delivered to the
sender. previously we could only drop a message from the queue.
- update the admin page & add new cli tools for these operations, with new
filtering rules for selecting the messages to operate on. in the admin
interface, add filtering and checkboxes to select a set of messages.
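a rough sketch of what such a hold rule could look like and how it might be
matched against a submitted message; the type and field names below are
hypothetical, not the actual mox queue implementation:

    package main

    import "fmt"

    // HoldRule is a hypothetical rule that marks matching submissions "on hold".
    // empty fields match any value, so an all-empty rule holds all messages.
    type HoldRule struct {
        Account         string // submitting account
        SenderDomain    string
        RecipientDomain string
    }

    // Matches reports whether a submitted message falls under this rule.
    func (r HoldRule) Matches(account, senderDomain, recipientDomain string) bool {
        return (r.Account == "" || r.Account == account) &&
            (r.SenderDomain == "" || r.SenderDomain == senderDomain) &&
            (r.RecipientDomain == "" || r.RecipientDomain == recipientDomain)
    }

    func main() {
        rule := HoldRule{RecipientDomain: "example.org"}
        fmt.Println(rule.Matches("alice", "sender.example", "example.org"))   // true
        fmt.Println(rule.Matches("alice", "sender.example", "other.example")) // false
    }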
it is a remnant from the time domains didn't have to be specified in
"Destination" addresses. we still use it as the default selection when adding a
new address to an account, but there's not much point in showing it so
prominently: it raises more questions than it helps.
for issue #142 by tabatinga0xffff
we only have a "storage" limit. for total disk usage. we don't have a limit on
messages (count) or mailboxes (count). also not on total annotation size, but
we don't have support annotations at all at the moment.
we don't implement setquota. with rfc 9208 that's allowed. with the previous
quota rfc 2087 it wasn't.
the status command can now return "DELETED-STORAGE", which should be the disk
space that can be reclaimed by removing messages with the \Deleted flag.
however, it's not very likely clients set the \Deleted flag without expunging
the message immediately. we don't want to go through all messages to calculate
the sum of sizes of messages with the \Deleted flag, and we also don't
currently track that in MailboxCount. so we just respond with "0". not
compliant, but let's wait until someone complains.
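schematically (a hypothetical response writer, not the actual mox imapserver
code), the data item simply always carries 0 for now:

    package main

    import "fmt"

    func main() {
        // hypothetical STATUS response for a client that requested MESSAGES and
        // DELETED-STORAGE: we return the real message count, but always 0 for
        // the reclaimable size, since we don't track it.
        mailbox, messages := "INBOX", 12
        fmt.Printf("* STATUS %s (MESSAGES %d DELETED-STORAGE 0)\r\n", mailbox, messages)
    }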
when returning quota information, it is not possible to give the current usage
when no limit is configured. clients implementing rfc 9208 should probably
conclude from the presence of QUOTA=RES-* capabilities (only in rfc 9208, not
in 2087) and the absence of those limits in quota responses (or the absence of
an untagged quota response at all) that a resource type doesn't have a limit.
thunderbird will claim there is no quota information when no limit was
configured, so we can probably conclude that it implements rfc 2087, but not
rfc 9208.
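a sketch of the quota responses this describes, assuming a hypothetical handler
that writes the untagged GETQUOTAROOT responses directly (not the actual mox
imapserver code): QUOTAROOT is always returned, the QUOTA response with the
STORAGE resource (usage and limit, in units of 1024 octets) only when a limit
is configured:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // writeQuotaRoot writes the untagged responses for GETQUOTAROOT.
    // usage and limit are in bytes; STORAGE is reported in units of 1024 octets.
    func writeQuotaRoot(w io.Writer, mailbox string, usage, limit int64) {
        fmt.Fprintf(w, "* QUOTAROOT %s \"\"\r\n", mailbox)
        if limit > 0 {
            fmt.Fprintf(w, "* QUOTA \"\" (STORAGE %d %d)\r\n", usage/1024, limit/1024)
            return
        }
        // no limit configured: no untagged QUOTA response at all, from which an
        // rfc 9208 client can conclude the resource has no limit.
    }

    func main() {
        writeQuotaRoot(os.Stdout, "INBOX", 50*1024*1024, 1024*1024*1024)
        writeQuotaRoot(os.Stdout, "INBOX", 50*1024*1024, 0)
    }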
we now also show the usage & limit on the account page.
for issue #115 by pmarini
preventing writing out a domains.conf that is invalid and can't be parsed
again. this happened when the last address was removed from an account, which
took just a click in the admin web interface. accounts without an email address
cannot log in.
for issue #133 by ally9335
by explaining (in the titles/hovers) what the concepts and requirements are, by
using selects/dropdowns or datalist suggestions where we have a known list, by
automatically suggesting a good account name, and by putting the input fields
in a more sensible order.
based on issue #132 by ally9335
so you can still know when someone has put you on their blocklist (which may
affect delivery), without using them.
also query dnsbls for our ips more often when we do more outgoing connections
for delivery: once every 100 messages, but at least 5 mins and at most 3 hours
since the previous check.
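one way to read that schedule, as an illustrative sketch (hypothetical helper,
not the actual mox delivery code):

    package main

    import (
        "fmt"
        "time"
    )

    // shouldCheckDNSBLs decides whether to query the dnsbls for our ips again:
    // roughly once per 100 delivered messages, never within 5 minutes of the
    // previous check, and in any case after 3 hours.
    func shouldCheckDNSBLs(msgsSinceCheck int, lastCheck time.Time) bool {
        elapsed := time.Since(lastCheck)
        if elapsed >= 3*time.Hour {
            return true
        }
        return msgsSinceCheck >= 100 && elapsed >= 5*time.Minute
    }

    func main() {
        fmt.Println(shouldCheckDNSBLs(150, time.Now().Add(-10*time.Minute))) // true
        fmt.Println(shouldCheckDNSBLs(150, time.Now().Add(-time.Minute)))    // false, too soon
        fmt.Println(shouldCheckDNSBLs(3, time.Now().Add(-4*time.Hour)))      // true, max interval passed
    }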
most content is in markdown files in website/, some is taken out of the repo
README and rfc/index.txt. a Go file generates html. static files are kept in a
separate repo due to size.
the http basic auth we had was very simple to reason about and to implement,
but it has a major downside: there is no way to log out, browsers keep sending
credentials. ideally, browsers themselves would show a button to stop sending
credentials.
a related downside: the http auth mechanism doesn't indicate for which server
paths the credentials are valid.
another downside: the original password is sent to the server with each
request, though sending original passwords to web servers seems to be
considered normal.
our new approach uses session cookies, along with csrf values when we can. the
sessions are server-side managed, automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (than with
long- vs short-term sessions and refreshing). the cookies are httponly,
samesite=strict, scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random, and associated with the session cookie.
the csrf must be sent as header for api calls, or as parameter for direct form
posts (where we cannot set a custom header). rest-like calls made directly by
the browser, e.g. for images, don't have csrf protection. the csrf value is
returned by the Login api call and stored in localstorage.
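for illustration, setting such a session cookie with Go's standard library
could look roughly like this; the cookie name, path and handler below are
placeholders, not the actual mox code:

    package main

    import (
        "log"
        "net/http"
    )

    // setSessionCookie sets a session cookie as described above: httponly,
    // samesite=strict, scoped to the path of the web interface, and "secure"
    // when the request came in over https.
    func setSessionCookie(w http.ResponseWriter, r *http.Request, token string) {
        http.SetCookie(w, &http.Cookie{
            Name:     "webaccountsession", // placeholder name
            Value:    token,
            Path:     "/account/", // placeholder path of the web interface
            HttpOnly: true,
            SameSite: http.SameSiteStrictMode,
            Secure:   r.TLS != nil,
        })
    }

    func main() {
        http.HandleFunc("/account/api/Login", func(w http.ResponseWriter, r *http.Request) {
            // after verifying the credentials (not shown), establish the session.
            setSessionCookie(w, r, "random-session-token")
        })
        log.Fatal(http.ListenAndServe("localhost:8080", nil))
    }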
api calls without credentials return code "user:noAuth", and with bad
credentials return "user:badAuth". the api client recognizes this and triggers
a login. after a login, all auth-failed api calls are automatically retried.
only for "user:badAuth" is an error message displayed in the login form (e.g.
session expired).
in an ideal world, browsers would take care of most session management. a
server would indicate authentication is needed (like http basic auth), and the
browser would use trusted ui to request credentials for the server & path. the
browser could use a safer mechanism than sending original passwords to the
server, such as scram, along with a standard way to create sessions. for now,
web developers have to do authentication themselves: from showing the login
prompt to ensuring the right session/csrf cookies/localstorage/headers/etc are
sent with each request.
webauthn is a newer way to do authentication; perhaps we'll implement it in the
future, though hardware tokens aren't an attractive option for many users, and
it may be overkill as long as we still do old-fashioned authentication in smtp
& imap, where passwords can be sent to the server.
for issue #58
all ui frontend code is now in typescript. we no longer need jshint, and we
build the frontend code during "make build".
this also changes the tlsrpt types for a Report: field names are no longer
encoded with dashes, keeping them valid identifiers in javascript. this makes
them more convenient to work with in the frontend, and works around a sherpats
limitation.
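the gist, as a hypothetical illustration rather than the actual mox tlsrpt
types: a dash-free json/sherpa field name gives sherpats a valid
javascript/typescript identifier, whereas the dashed names from the tlsrpt
wire format (e.g. "policy-domain") would not be:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // hypothetical type: the field name in the tag contains no dash, so the
    // typescript code generated from it can use it as a plain identifier.
    type Policy struct {
        PolicyDomain string `json:"PolicyDomain"`
    }

    func main() {
        buf, _ := json.Marshal(Policy{PolicyDomain: "example.org"})
        fmt.Println(string(buf)) // {"PolicyDomain":"example.org"}
    }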