package main
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"log/slog"
"maps"
"net"
"os"
"path/filepath"
"runtime/debug"
"sort"
"strconv"
"strings"
"time"
"github.com/mjl-/bstore"
"github.com/mjl-/mox/admin"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/message"
"github.com/mjl-/mox/metrics"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/queue"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/store"
"github.com/mjl-/mox/webapi"
)
// ctl represents a connection to the ctl unix domain socket of a running mox instance.
// ctl provides functions to read/write commands/responses/data streams.
type ctl struct {
	cmd  string // Set for server-side of commands.
	conn net.Conn
	r    *bufio.Reader // Set for first reader.
	x    any           // If set, errors are handled by calling panic(x) instead of log.Fatal.
	log  mlog.Log      // If set, along with x, logging is done here.
}
// xctl opens a ctl connection.
func xctl() *ctl {
	p := mox.DataDirPath("ctl")
	conn, err := net.Dial("unix", p)
	if err != nil {
		log.Fatalf("connecting to control socket at %q: %v", p, err)
	}
	ctl := &ctl{conn: conn}
	version := ctl.xread()
	if version != "ctlv0" {
		log.Fatalf("ctl protocol mismatch, got %q, expected ctlv0", version)
	}
	return ctl
}
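
// xexampleSetAccountPassword is an illustrative sketch (not part of mox) of how a
// client-side subcommand can drive the ctl protocol: open the socket with xctl,
// write the command name and its parameters as lines, then read back "ok" or an
// error line. It mirrors the "setaccountpassword" protocol handled in servectlcmd
// below; the function name is hypothetical.
func xexampleSetAccountPassword(account, password string) {
	ctl := xctl()
	ctl.xwrite("setaccountpassword")
	ctl.xwrite(account)
	ctl.xwrite(password)
	ctl.xreadok()
}
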
// Interpret msg as an error.
// If ctl.x is set, the string is also written to the ctl to be interpreted as an error by the other party.
func (c *ctl) xerror(msg string) {
	if c.x == nil {
		log.Fatalln(msg)
	}
	c.log.Debugx("ctl error", fmt.Errorf("%s", msg), slog.String("cmd", c.cmd))
	c.xwrite(msg)
	panic(c.x)
}
// Check if err is not nil. If so, handle error through ctl.x or log.Fatal. If
// ctl.x is set, the error string is written to ctl, to be interpreted as an error
// by the command reading from ctl.
func (c *ctl) xcheck(err error, msg string) {
	if err == nil {
		return
	}
	if c.x == nil {
		log.Fatalf("%s: %s", msg, err)
	}
	c.log.Debugx(msg, err, slog.String("cmd", c.cmd))
	fmt.Fprintf(c.conn, "%s: %s\n", msg, err)
	panic(c.x)
}
// Read a line and return it without trailing newline.
func (c *ctl) xread() string {
	if c.r == nil {
		c.r = bufio.NewReader(c.conn)
	}
	line, err := c.r.ReadString('\n')
	c.xcheck(err, "read from ctl")
	return strings.TrimSuffix(line, "\n")
}
// Read a line. If not "ok", the string is interpreted as an error.
func (c *ctl) xreadok() {
	line := c.xread()
	if line != "ok" {
		c.xerror(line)
	}
}
// Write a string, typically a command or parameter.
func (c *ctl) xwrite(text string) {
	_, err := fmt.Fprintln(c.conn, text)
	c.xcheck(err, "write")
}
// Write "ok" to indicate success.
func (c *ctl) xwriteok() {
	c.xwrite("ok")
}
// Copy data from a stream from ctl to dst.
func (c *ctl) xstreamto(dst io.Writer) {
	_, err := io.Copy(dst, c.reader())
	c.xcheck(err, "reading message")
}
// Copy data from src to a stream to ctl.
func (c *ctl) xstreamfrom(src io.Reader) {
	w := c.writer()
	_, err := io.Copy(w, src)
	c.xcheck(err, "copying")
	w.xclose()
}
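
// xexampleDeliver is an illustrative sketch (not part of mox) combining the line
// and stream helpers above, following the "deliver" protocol handled in
// servectlcmd below: send the command and recipient address, wait for "ok",
// stream the raw message, then wait for the final "ok". The function name and
// msg parameter are hypothetical.
func xexampleDeliver(to string, msg io.Reader) {
	ctl := xctl()
	ctl.xwrite("deliver")
	ctl.xwrite(to)
	ctl.xreadok()
	ctl.xstreamfrom(msg)
	ctl.xreadok()
}
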
// Writer returns an io.Writer for a data stream to ctl.
// When done writing, caller must call xclose to signal the end of the stream.
// Behaviour of "x" is copied from ctl.
func (c *ctl) writer() *ctlwriter {
	return &ctlwriter{cmd: c.cmd, conn: c.conn, x: c.x, log: c.log}
}
// Reader returns an io.Reader for a data stream from ctl.
// Behaviour of "x" is copied from ctl.
func (c *ctl) reader() *ctlreader {
	if c.r == nil {
		c.r = bufio.NewReader(c.conn)
	}
	return &ctlreader{cmd: c.cmd, conn: c.conn, r: c.r, x: c.x, log: c.log}
}
/*
Ctlwriter and ctlreader implement writing and reading a data stream. They
implement the io.Writer and io.Reader interfaces. In the protocol below, each
non-data message ends with a newline that is typically stripped when
interpreting.

Zero or more data transactions:

	> "123" (for data size) or an error message
	> data, 123 bytes
	< "ok" or an error message

Followed by an end of stream, indicated by a zero data bytes message:

	> "0"
*/
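
// For example (illustrative), streaming the 6 bytes "hello\n" and then ending the
// stream looks like this on the wire:
//
//	> "6"
//	> hello\n
//	< "ok"
//	> "0"
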
type ctlwriter struct {
	cmd  string   // Set for server-side of commands.
	conn net.Conn // Ctl socket from which messages are read.
	buf  []byte   // Scratch buffer, for reading response.
	x    any      // If not nil, errors in Write and xcheck are handled with panic(x), otherwise with a log.Fatal.
	log  mlog.Log
}
func (s *ctlwriter) Write(buf []byte) (int, error) {
	_, err := fmt.Fprintf(s.conn, "%d\n", len(buf))
	s.xcheck(err, "write count")
	_, err = s.conn.Write(buf)
	s.xcheck(err, "write data")
	if s.buf == nil {
		s.buf = make([]byte, 512)
	}
	n, err := s.conn.Read(s.buf)
	s.xcheck(err, "reading response to write")
	line := strings.TrimSuffix(string(s.buf[:n]), "\n")
	if line != "ok" {
		s.xerror(line)
	}
	return len(buf), nil
}
func (s *ctlwriter) xerror(msg string) {
	if s.x == nil {
		log.Fatalln(msg)
	} else {
		s.log.Debugx("error", fmt.Errorf("%s", msg), slog.String("cmd", s.cmd))
		panic(s.x)
	}
}
func (s *ctlwriter) xcheck(err error, msg string) {
	if err == nil {
		return
	}
	if s.x == nil {
		log.Fatalf("%s: %s", msg, err)
	} else {
		s.log.Debugx(msg, err, slog.String("cmd", s.cmd))
		panic(s.x)
	}
}
func (s *ctlwriter) xclose() {
	_, err := fmt.Fprintf(s.conn, "0\n")
	s.xcheck(err, "write eof")
}
type ctlreader struct {
	cmd      string        // Set for server-side of commands.
	conn     net.Conn      // For writing "ok" after reading.
	r        *bufio.Reader // Buffered ctl socket.
	err      error         // If set, returned for each read. Can also be io.EOF.
	npending int           // Number of bytes that can still be read until a new count line must be read.
	x        any           // If set, errors are handled with panic(x) instead of log.Fatal.
	log      mlog.Log      // If x is set, logging goes to log.
}
func (s *ctlreader) Read(buf []byte) (N int, Err error) {
	if s.err != nil {
		return 0, s.err
	}
	if s.npending == 0 {
		line, err := s.r.ReadString('\n')
		s.xcheck(err, "reading count")
		line = strings.TrimSuffix(line, "\n")
		n, err := strconv.ParseInt(line, 10, 32)
		if err != nil {
			s.xerror(line)
		}
		if n == 0 {
			s.err = io.EOF
			return 0, s.err
		}
		s.npending = int(n)
	}
	rn := len(buf)
	if rn > s.npending {
		rn = s.npending
	}
	n, err := s.r.Read(buf[:rn])
	s.xcheck(err, "read from ctl")
	s.npending -= n
	if s.npending == 0 {
		_, err = fmt.Fprintln(s.conn, "ok")
		s.xcheck(err, "writing ok after reading")
	}
	return n, err
}
func (s *ctlreader) xerror(msg string) {
	if s.x == nil {
		log.Fatalln(msg)
	} else {
		s.log.Debugx("error", fmt.Errorf("%s", msg), slog.String("cmd", s.cmd))
		panic(s.x)
	}
}
func (s *ctlreader) xcheck(err error, msg string) {
	if err == nil {
		return
	}
	if s.x == nil {
		log.Fatalf("%s: %s", msg, err)
	} else {
		s.log.Debugx(msg, err, slog.String("cmd", s.cmd))
		panic(s.x)
	}
}
// servectl handles requests on the unix domain socket "ctl", e.g. for graceful shutdown, local mail delivery.
func servectl(ctx context.Context, log mlog.Log, conn net.Conn, shutdown func()) {
	log.Debug("ctl connection")

	var stop = struct{}{} // Sentinel value for panic and recover.
	ctl := &ctl{conn: conn, x: stop, log: log}
	defer func() {
		x := recover()
		if x == nil || x == stop {
			return
		}
		log.Error("servectl panic", slog.Any("err", x), slog.String("cmd", ctl.cmd))
		debug.PrintStack()
		metrics.PanicInc(metrics.Ctl)
	}()
	defer conn.Close()

	ctl.xwrite("ctlv0")
	for {
		servectlcmd(ctx, ctl, shutdown)
}
}
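
// xexampleQueueHoldrulesAdd is an illustrative sketch (not part of mox) of the
// client side of the "queueholdrulesadd" command handled in servectlcmd below:
// the account, sender domain and recipient domain are written as lines, followed
// by reading "ok" or an error. The function name is hypothetical.
func xexampleQueueHoldrulesAdd(account, senderDomain, recipientDomain string) {
	ctl := xctl()
	ctl.xwrite("queueholdrulesadd")
	ctl.xwrite(account)
	ctl.xwrite(senderDomain)
	ctl.xwrite(recipientDomain)
	ctl.xreadok()
}
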
func xparseJSON(ctl *ctl, s string, v any) {
	dec := json.NewDecoder(strings.NewReader(s))
	dec.DisallowUnknownFields()
	err := dec.Decode(v)
	ctl.xcheck(err, "parsing from ctl as json")
}
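
// xexampleQueuelist is an illustrative sketch (not part of mox) of the client
// side of the "queuelist" command handled in servectlcmd below: the filter and
// sort are sent as single-line JSON values (decoded server-side with xparseJSON
// above), then "ok" and the textual listing are read back. The function name is
// hypothetical, and sending "{}" for a default sort is an assumption.
func xexampleQueuelist(ctl *ctl) {
	var f queue.Filter // Zero value: no filtering.
	fbuf, err := json.Marshal(f)
	ctl.xcheck(err, "marshal filter")

	ctl.xwrite("queuelist")
	ctl.xwrite(string(fbuf))
	ctl.xwrite(`{}`) // Sort as JSON; an empty object is assumed to mean default ordering.
	ctl.xreadok()
	ctl.xstreamto(os.Stdout)
}
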
func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) {
	log := ctl.log
	cmd := ctl.xread()
	ctl.cmd = cmd
	log.Info("ctl command", slog.String("cmd", cmd))
	switch cmd {
	case "stop":
		shutdown()
		os.Exit(0)

	case "deliver":
		/* The protocol; double-quoted strings are literals.
		> "deliver"
		> address
		< "ok"
		> stream
		< "ok"
		*/

		to := ctl.xread()
		a, addr, err := store.OpenEmail(log, to)
		ctl.xcheck(err, "lookup destination address")

		msgFile, err := store.CreateMessageTemp(log, "ctl-deliver")
		ctl.xcheck(err, "creating temporary message file")
		defer store.CloseRemoveTempFile(log, msgFile, "deliver message")
		mw := message.NewWriter(msgFile)
		ctl.xwriteok()

		ctl.xstreamto(mw)
		err = msgFile.Sync()
		ctl.xcheck(err, "syncing message to storage")

		m := store.Message{
			Received: time.Now(),
			Size:     mw.Size,
		}

		a.WithWLock(func() {
			err := a.DeliverDestination(log, addr, &m, msgFile)
			ctl.xcheck(err, "delivering message")
			log.Info("message delivered through ctl", slog.Any("to", to))
		})

		err = a.Close()
		ctl.xcheck(err, "closing account")
		ctl.xwriteok()
	case "setaccountpassword":
		/* protocol:
		> "setaccountpassword"
		> account
		> password
		< "ok" or error
		*/

		account := ctl.xread()
		pw := ctl.xread()

		acc, err := store.OpenAccount(log, account)
		ctl.xcheck(err, "open account")
		defer func() {
			if acc != nil {
				err := acc.Close()
				log.Check(err, "closing account after setting password")
			}
		}()

		err = acc.SetPassword(log, pw)
		ctl.xcheck(err, "setting password")

		err = acc.Close()
		ctl.xcheck(err, "closing account")
		acc = nil

		ctl.xwriteok()
	case "queueholdruleslist":
		/* protocol:
		> "queueholdruleslist"
		< "ok"
		< stream
		*/
		l, err := queue.HoldRuleList(ctx)
		ctl.xcheck(err, "listing hold rules")
		ctl.xwriteok()
		xw := ctl.writer()
		fmt.Fprintln(xw, "hold rules:")
		for _, hr := range l {
			var elems []string
			if hr.Account != "" {
				elems = append(elems, fmt.Sprintf("account %q", hr.Account))
			}
			var zerodom dns.Domain
			if hr.SenderDomain != zerodom {
				elems = append(elems, fmt.Sprintf("sender domain %q", hr.SenderDomain.Name()))
			}
			if hr.RecipientDomain != zerodom {
				elems = append(elems, fmt.Sprintf("recipient domain %q", hr.RecipientDomain.Name()))
			}
			if len(elems) == 0 {
				fmt.Fprintf(xw, "id %d: all messages\n", hr.ID)
			} else {
				fmt.Fprintf(xw, "id %d: %s\n", hr.ID, strings.Join(elems, ", "))
			}
		}
		if len(l) == 0 {
			fmt.Fprint(xw, "(none)\n")
		}
		xw.xclose()

	case "queueholdrulesadd":
		/* protocol:
		> "queueholdrulesadd"
		> account
		> senderdomainstr
		> recipientdomainstr
		< "ok" or error
		*/
		var hr queue.HoldRule
		hr.Account = ctl.xread()
		senderdomstr := ctl.xread()
		rcptdomstr := ctl.xread()
		var err error
		hr.SenderDomain, err = dns.ParseDomain(senderdomstr)
		ctl.xcheck(err, "parsing sender domain")
		hr.RecipientDomain, err = dns.ParseDomain(rcptdomstr)
		ctl.xcheck(err, "parsing recipient domain")
		hr, err = queue.HoldRuleAdd(ctx, log, hr)
		ctl.xcheck(err, "add hold rule")
		ctl.xwriteok()

	case "queueholdrulesremove":
		/* protocol:
		> "queueholdrulesremove"
		> id
		< "ok" or error
		*/
		idstr := ctl.xread()
		id, err := strconv.ParseInt(idstr, 10, 64)
		ctl.xcheck(err, "parsing id")
		err = queue.HoldRuleRemove(ctx, log, id)
		ctl.xcheck(err, "remove hold rule")
		ctl.xwriteok()
	case "queuelist":
		/* protocol:
		> "queuelist"
		> filters as json
		> sort as json
		< "ok"
		< stream
		*/
		filterline := ctl.xread()
		sortline := ctl.xread()
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.
this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need an http & json library.
unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...
matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (removed from the queue after either
successful or failed delivery). this can be enabled per account. history is
also useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show a
potentially large history.
a queue of webhook calls is now managed too. failures are retried similarly to
message deliveries. webhooks can also be saved to the retired list after
completing. this is also configurable per account.
messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid is triggered by authenticating with
a login email address that is configured to enable it.
suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.
submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries: through the webapi as a json object, and through smtp
submission as message headers of the form "x-mox-extra-<key>: value".
to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account, unless the recipient
address has a special form, in which case a delivery failure is simulated.
admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can match certain
sender or recipient domains/addresses, or apply to all messages, pausing the
entire queue. this is also useful for (local) testing.
new config options have been introduced. they are editable through the admin
and/or account web interfaces.
the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.
gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code that uses the webapi/webhooks is smaller and
simpler, and developing it shaped the development of the mox webapi/webhooks.
for issue #31 by cuu508
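as a rough illustration of the fromid and "extra" data mechanisms described
above, here is a small standalone go sketch. the names and helpers in it are
made up for illustration only and are not the mox implementation; just the
shape of the resulting address and headers follows the description.

// sketch: build a "unique smtp mail from" address from a localpart, the
// configured catchall separator and a random fromid, and show how "extra"
// key/value data maps to x-mox-extra-<key> headers. hypothetical code, not
// mox internals.
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

func newFromID() string {
	// a short random id; the exact format mox uses may differ.
	buf := make([]byte, 9)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	return base64.RawURLEncoding.EncodeToString(buf)
}

func main() {
	localpart, domain, separator := "newsletter", "mox.example", "+"

	// the fromid is added after the catchall separator, so a DSN sent back to
	// this address can be related to the original outgoing message.
	mailFrom := localpart + separator + newFromID() + "@" + domain
	fmt.Println("unique mail from:", mailFrom)

	// "extra" data submitted over smtp travels as x-mox-extra-<key> headers.
	extra := map[string]string{"campaign": "spring", "userid": "123"}
	for k, v := range extra {
		fmt.Printf("x-mox-extra-%s: %s\n", k, v)
	}
}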
2024-04-15 22:49:02 +03:00
var f queue.Filter
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, filterline, &f)
2024-04-15 22:49:02 +03:00
var s queue.Sort
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, sortline, &s)
2024-04-15 22:49:02 +03:00
qmsgs, err := queue.List(ctx, f, s)
2023-01-30 16:27:06 +03:00
ctl.xcheck(err, "listing queue")
ctl.xwriteok()
xw := ctl.writer()
2024-03-18 10:50:42 +03:00
fmt.Fprintln(xw, "messages:")
2023-01-30 16:27:06 +03:00
for _, qm := range qmsgs {
var lastAttempt string
if qm.LastAttempt != nil {
lastAttempt = time.Since(*qm.LastAttempt).Round(time.Second).String()
}
2024-04-15 22:49:02 +03:00
fmt.Fprintf(xw, "%5d %s from:%s to:%s next %s last %s error %q\n", qm.ID, qm.Queued.Format(time.RFC3339), qm.Sender().LogString(), qm.Recipient().LogString(), -time.Since(qm.NextAttempt).Round(time.Second), lastAttempt, qm.LastResult().Error)
2023-01-30 16:27:06 +03:00
}
if len(qmsgs) == 0 {
2024-03-18 10:50:42 +03:00
fmt.Fprint(xw, "(none)\n")
2023-01-30 16:27:06 +03:00
}
xw.xclose()
2024-03-18 10:50:42 +03:00
case "queueholdset":
2023-01-30 16:27:06 +03:00
/* protocol:
2024-03-18 10:50:42 +03:00
> "queueholdset"
> queuefilters as json
> "true" or "false"
< "ok" or error
2023-01-30 16:27:06 +03:00
< count
2024-03-18 10:50:42 +03:00
*/
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
2024-03-18 10:50:42 +03:00
hold := ctl.xread() == "true"
2024-05-09 22:11:20 +03:00
var f queue.Filter
xparseJSON(ctl, filterline, &f)
2024-03-18 10:50:42 +03:00
count, err := queue.HoldSet(ctx, f, hold)
ctl.xcheck(err, "setting on hold status for messages")
ctl.xwriteok()
ctl.xwrite(fmt.Sprintf("%d", count))
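// purely as an illustration of the line-based ctl exchanges documented in the
// protocol comments around here, the following sketch shows what a client side
// of "queueholdset" could look like: write the command, a filter as json and
// "true"/"false", then read the "ok" line and the count. the newline framing
// and helper names are assumptions for the example, not the real mox ctl client.
package ctlsketch

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"strconv"
	"strings"
)

// queueHoldSet writes the command, the filter as json and "true"/"false",
// then reads the "ok" line and the count line.
func queueHoldSet(conn io.ReadWriter, filter any, hold bool) (int, error) {
	filterJSON, err := json.Marshal(filter)
	if err != nil {
		return 0, err
	}
	// > "queueholdset", > queuefilters as json, > "true" or "false"
	if _, err := fmt.Fprintf(conn, "queueholdset\n%s\n%t\n", filterJSON, hold); err != nil {
		return 0, err
	}

	// < "ok" or error, < count
	r := bufio.NewReader(conn)
	readLine := func() (string, error) {
		s, err := r.ReadString('\n')
		return strings.TrimSuffix(s, "\n"), err
	}
	status, err := readLine()
	if err != nil {
		return 0, err
	}
	if status != "ok" {
		return 0, fmt.Errorf("ctl error: %s", status)
	}
	countLine, err := readLine()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(countLine)
}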
case "queueschedule" :
/ * protocol :
> "queueschedule"
> queuefilters as json
> relative to now
> duration
2023-01-30 16:27:06 +03:00
< "ok" or error
2024-03-18 10:50:42 +03:00
< count
2023-01-30 16:27:06 +03:00
*/
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
2024-03-18 10:50:42 +03:00
relnow := ctl.xread()
2024-05-09 22:11:20 +03:00
duration := ctl.xread()
var f queue.Filter
xparseJSON(ctl, filterline, &f)
d, err := time.ParseDuration(duration)
2024-03-18 10:50:42 +03:00
ctl.xcheck(err, "parsing duration for next delivery attempt")
var count int
if relnow == "" {
count, err = queue.NextAttemptAdd(ctx, f, d)
} else {
count, err = queue.NextAttemptSet(ctx, f, time.Now().Add(d))
2023-01-30 16:27:06 +03:00
}
2024-03-18 10:50:42 +03:00
ctl . xcheck ( err , "setting next delivery attempts in queue" )
ctl . xwriteok ( )
ctl . xwrite ( fmt . Sprintf ( "%d" , count ) )
case "queuetransport" :
/ * protocol :
> "queuetransport"
> queuefilters as json
> transport
< "ok" or error
< count
* /
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
2024-03-18 10:50:42 +03:00
transport := ctl.xread()
2024-05-09 22:11:20 +03:00
var f queue.Filter
xparseJSON(ctl, filterline, &f)
2024-03-18 10:50:42 +03:00
count, err := queue.TransportSet(ctx, f, transport)
ctl.xcheck(err, "setting transport for messages in queue")
ctl.xwriteok()
ctl.xwrite(fmt.Sprintf("%d", count))
case "queuerequiretls":
/* protocol:
> "queuerequiretls"
> queuefilters as json
> reqtls (empty string, "true" or "false")
< "ok" or error
< count
*/
2023-01-30 16:27:06 +03:00
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
2024-03-18 10:50:42 +03:00
reqtls := ctl.xread()
var req *bool
switch reqtls {
case "":
case "true":
v := true
req = &v
case "false":
v := false
req = &v
default:
ctl.xcheck(fmt.Errorf("unknown value %q", reqtls), "parsing value")
2023-01-30 16:27:06 +03:00
}
2024-05-09 22:11:20 +03:00
var f queue.Filter
xparseJSON(ctl, filterline, &f)
2024-03-18 10:50:42 +03:00
count, err := queue.RequireTLSSet(ctx, f, req)
ctl.xcheck(err, "setting tls requirements on messages in queue")
ctl.xwriteok()
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, i.e. relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
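to make the route evaluation order above concrete, here is a small
self-contained go sketch: account routes, then domain routes, then global
routes, first match wins, and an empty transport means normal direct delivery.
the types and field names are simplified stand-ins for illustration, not the
actual mox config structures.

// illustrative sketch of route/transport selection; not mox's real types.
package main

import "fmt"

type route struct {
	SenderDomains    []string // empty means "any sender domain"
	RecipientDomains []string // empty means "any recipient domain"
	MinimumAttempts  int      // e.g. 4 to only use this route from the 4th attempt onwards
	Transport        string   // empty selects normal direct delivery
}

func matches(r route, senderDom, rcptDom string, attempt int) bool {
	contains := func(l []string, v string) bool {
		if len(l) == 0 {
			return true
		}
		for _, s := range l {
			if s == v {
				return true
			}
		}
		return false
	}
	return attempt >= r.MinimumAttempts && contains(r.SenderDomains, senderDom) && contains(r.RecipientDomains, rcptDom)
}

// selectTransport evaluates account, domain and global routes in that order,
// returning the transport of the first matching route, or "" for direct delivery.
func selectTransport(accountRoutes, domainRoutes, globalRoutes []route, senderDom, rcptDom string, attempt int) string {
	for _, routes := range [][]route{accountRoutes, domainRoutes, globalRoutes} {
		for _, r := range routes {
			if matches(r, senderDom, rcptDom, attempt) {
				return r.Transport
			}
		}
	}
	return ""
}

func main() {
	global := []route{{MinimumAttempts: 4, Transport: "smarthost"}}
	fmt.Printf("%q\n", selectTransport(nil, nil, global, "example.org", "example.com", 1)) // "" -> direct delivery
	fmt.Printf("%q\n", selectTransport(nil, nil, global, "example.org", "example.com", 4)) // "smarthost"
}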
2023-06-16 19:38:28 +03:00
ctl.xwrite(fmt.Sprintf("%d", count))
2024-03-18 10:50:42 +03:00
case "queuefail":
/* protocol:
> "queuefail"
> queuefilters as json
< "ok" or error
< count
*/
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
2024-04-15 22:49:02 +03:00
var f queue.Filter
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, filterline, &f)
2024-03-18 10:50:42 +03:00
count, err := queue.Fail(ctx, log, f)
ctl.xcheck(err, "marking messages from queue as failed")
2023-06-16 19:38:28 +03:00
ctl.xwriteok()
2024-03-18 10:50:42 +03:00
ctl.xwrite(fmt.Sprintf("%d", count))
2023-06-16 19:38:28 +03:00
case "queuedrop" :
/ * protocol :
> "queuedrop"
2024-03-18 10:50:42 +03:00
> queuefilters as json
2023-06-16 19:38:28 +03:00
< "ok" or error
2024-03-18 10:50:42 +03:00
< count
2023-06-16 19:38:28 +03:00
*/
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
2024-04-15 22:49:02 +03:00
var f queue.Filter
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, filterline, &f)
2024-03-18 10:50:42 +03:00
count, err := queue.Drop(ctx, log, f)
2023-06-16 19:38:28 +03:00
ctl . xcheck ( err , "dropping messages from queue" )
2023-01-30 16:27:06 +03:00
ctl . xwriteok ( )
2024-03-18 10:50:42 +03:00
ctl . xwrite ( fmt . Sprintf ( "%d" , count ) )
2023-01-30 16:27:06 +03:00
case "queuedump" :
/ * protocol :
> "queuedump"
> id
< "ok" or error
< stream
* /
idstr := ctl . xread ( )
id , err := strconv . ParseInt ( idstr , 10 , 64 )
if err != nil {
ctl . xcheck ( err , "parsing id" )
}
2023-05-22 15:40:36 +03:00
mr, err := queue.OpenMessage(ctx, id)
2023-01-30 16:27:06 +03:00
ctl.xcheck(err, "opening message")
2023-02-16 15:22:00 +03:00
defer func() {
err := mr.Close()
log.Check(err, "closing message from queue")
}()
2023-01-30 16:27:06 +03:00
ctl.xwriteok()
ctl.xstreamfrom(mr)
2024-04-15 22:49:02 +03:00
case "queueretiredlist" :
/ * protocol :
> "queueretiredlist"
> filters as json
> sort as json
< "ok"
< stream
* /
2024-05-09 22:11:20 +03:00
filterline := ctl . xread ( )
sortline := ctl . xread ( )
2024-04-15 22:49:02 +03:00
var f queue.RetiredFilter
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, filterline, &f)
2024-04-15 22:49:02 +03:00
var s queue.RetiredSort
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, sortline, &s)
2024-04-15 22:49:02 +03:00
qmsgs, err := queue.RetiredList(ctx, f, s)
ctl.xcheck(err, "listing retired queue")
ctl.xwriteok()
xw := ctl.writer()
fmt.Fprintln(xw, "retired messages:")
for _, qm := range qmsgs {
var lastAttempt string
if qm.LastAttempt != nil {
lastAttempt = time.Since(*qm.LastAttempt).Round(time.Second).String()
}
result := "failure"
if qm.Success {
result = "success"
}
sender, err := qm.Sender()
xcheckf(err, "parsing sender")
fmt.Fprintf(xw, "%5d %s %s from:%s to:%s last %s error %q\n", qm.ID, qm.Queued.Format(time.RFC3339), result, sender.LogString(), qm.Recipient().LogString(), lastAttempt, qm.LastResult().Error)
}
if len(qmsgs) == 0 {
fmt.Fprint(xw, "(none)\n")
}
xw.xclose()
case "queueretiredprint":
/* protocol:
> "queueretiredprint"
> id
< "ok"
< stream
*/
idstr := ctl.xread()
id, err := strconv.ParseInt(idstr, 10, 64)
if err != nil {
ctl.xcheck(err, "parsing id")
}
l, err := queue.RetiredList(ctx, queue.RetiredFilter{IDs: []int64{id}}, queue.RetiredSort{})
ctl.xcheck(err, "getting retired messages")
if len(l) == 0 {
ctl.xcheck(errors.New("not found"), "getting retired message")
}
m := l[0]
ctl.xwriteok()
xw := ctl.writer()
enc := json.NewEncoder(xw)
enc.SetIndent("", "\t")
err = enc.Encode(m)
ctl.xcheck(err, "encode retired message")
xw.xclose()
case "queuehooklist":
/* protocol:
> "queuehooklist"
> filters as json
> sort as json
< "ok"
< stream
*/
2024-05-09 22:11:20 +03:00
filterline := ctl.xread()
sortline := ctl.xread()
2024-04-15 22:49:02 +03:00
var f queue.HookFilter
2024-05-09 22:11:20 +03:00
xparseJSON(ctl, filterline, &f)
2024-04-15 22:49:02 +03:00
var s queue.HookSort
xparseJSON(ctl, sortline, &s)
hooks, err := queue.HookList(ctx, f, s)
ctl.xcheck(err, "listing webhooks")
ctl.xwriteok()
xw := ctl.writer()
fmt.Fprintln(xw, "webhooks:")
for _, h := range hooks {
	var lastAttempt string
	if len(h.Results) > 0 {
		lastAttempt = time.Since(h.LastResult().Start).Round(time.Second).String()
	}
	fmt.Fprintf(xw, "%5d %s account:%s next %s last %s error %q url %s\n", h.ID, h.Submitted.Format(time.RFC3339), h.Account, time.Until(h.NextAttempt).Round(time.Second), lastAttempt, h.LastResult().Error, h.URL)
}
if len(hooks) == 0 {
	fmt.Fprint(xw, "(none)\n")
}
xw.xclose()
case "queuehookschedule":
/* protocol:
> "queuehookschedule"
> hookfilters as json
> relative to now
> duration
< "ok" or error
< count
*/
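// Illustrative exchange, in the notation of the protocol comment above. The
// filter JSON and values are hypothetical; any field of queue.HookFilter (such
// as IDs, as used by the queuehookprint case below) can be supplied:
//
//	> "queuehookschedule"
//	> {"IDs": [123]}
//	> yes
//	> 30m
//	< "ok"
//	< 1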
filterline := ctl.xread()
relnow := ctl.xread()
duration := ctl.xread()
var f queue.HookFilter
xparseJSON(ctl, filterline, &f)
d, err := time.ParseDuration(duration)
ctl . xcheck ( err , "parsing duration for next delivery attempt" )
var count int
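// An empty "relative to now" line adds the duration to each matching hook's
// current next attempt time; any other value schedules the next attempt at the
// current time plus the duration.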
if relnow == "" {
	count, err = queue.HookNextAttemptAdd(ctx, f, d)
} else {
	count, err = queue.HookNextAttemptSet(ctx, f, time.Now().Add(d))
}
ctl.xcheck(err, "setting next delivery attempts in queue")
ctl.xwriteok()
ctl.xwrite(fmt.Sprintf("%d", count))
case "queuehookcancel":
/* protocol:
> "queuehookcancel"
> hookfilters as json
< "ok" or error
< count
*/
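// All webhooks matching the filter are canceled; the number of affected hooks
// is written back after the "ok".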
filterline := ctl.xread()
var f queue.HookFilter
xparseJSON(ctl, filterline, &f)
count, err := queue.HookCancel(ctx, log, f)
ctl.xcheck(err, "canceling webhooks in queue")
ctl.xwriteok()
ctl.xwrite(fmt.Sprintf("%d", count))
case "queuehookprint":
/* protocol:
> "queuehookprint"
> id
< "ok"
< stream
*/
idstr := ctl.xread()
id, err := strconv.ParseInt(idstr, 10, 64)
if err != nil {
	ctl.xcheck(err, "parsing id")
}
l, err := queue.HookList(ctx, queue.HookFilter{IDs: []int64{id}}, queue.HookSort{})
ctl.xcheck(err, "getting webhooks")
if len(l) == 0 {
	ctl.xcheck(errors.New("not found"), "getting webhook")
}
h := l[0]
ctl.xwriteok()
xw := ctl.writer()
enc := json.NewEncoder(xw)
enc.SetIndent("", "\t")
err = enc.Encode(h)
ctl.xcheck(err, "encode webhook")
xw.xclose()
case "queuehookretiredlist":
/* protocol:
> "queuehookretiredlist"
> filters as json
> sort as json
< "ok"
< stream
*/
filterline := ctl.xread()
sortline := ctl.xread()
var f queue.HookRetiredFilter
xparseJSON(ctl, filterline, &f)
var s queue.HookRetiredSort
xparseJSON(ctl, sortline, &s)
l, err := queue.HookRetiredList(ctx, f, s)
ctl.xcheck(err, "listing retired webhooks")
ctl.xwriteok()
xw := ctl.writer()
fmt.Fprintln(xw, "retired webhooks:")
for _, h := range l {
	var lastAttempt string
	if len(h.Results) > 0 {
		lastAttempt = time.Since(h.LastResult().Start).Round(time.Second).String()
	}
	result := "success"
	if !h.Success {
		result = "failure"
	}
	fmt.Fprintf(xw, "%5d %s %s account:%s last %s error %q url %s\n", h.ID, h.Submitted.Format(time.RFC3339), result, h.Account, lastAttempt, h.LastResult().Error, h.URL)
}
if len(l) == 0 {
	fmt.Fprint(xw, "(none)\n")
}
xw.xclose()
case "queuehookretiredprint":
/* protocol:
> "queuehookretiredprint"
> id
< "ok"
< stream
*/
idstr := ctl.xread()
id, err := strconv.ParseInt(idstr, 10, 64)
if err != nil {
	ctl.xcheck(err, "parsing id")
}
l, err := queue.HookRetiredList(ctx, queue.HookRetiredFilter{IDs: []int64{id}}, queue.HookRetiredSort{})
ctl.xcheck(err, "getting retired webhooks")
if len(l) == 0 {
	ctl.xcheck(errors.New("not found"), "getting retired webhook")
}
h := l[0]
ctl.xwriteok()
xw := ctl.writer()
enc := json.NewEncoder(xw)
enc.SetIndent("", "\t")
err = enc.Encode(h)
ctl.xcheck(err, "encode retired webhook")
xw.xclose()
case "queuesuppresslist":
/* protocol:
> "queuesuppresslist"
> account (or empty)
< "ok" or error
< stream
*/
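// The stream is a header line followed by one tab-separated line per
// suppression: quoted account, quoted original address, manual yes/no, time
// added, quoted base address and quoted reason.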
account := ctl.xread()
l, err := queue.SuppressionList(ctx, account)
ctl.xcheck(err, "listing suppressions")
ctl.xwriteok()
xw := ctl.writer()
fmt.Fprintln(xw, "suppressions (account, address, manual, time added, base address, reason):")
for _, sup := range l {
	manual := "No"
	if sup.Manual {
		manual = "Yes"
	}
	fmt.Fprintf(xw, "%q\t%q\t%s\t%s\t%q\t%q\n", sup.Account, sup.OriginalAddress, manual, sup.Created.Round(time.Second), sup.BaseAddress, sup.Reason)
}
if len(l) == 0 {
	fmt.Fprintln(xw, "(none)")
}
xw.xclose()
case "queuesuppressadd" :
/ * protocol :
> "queuesuppressadd"
> account
> address
< "ok" or error
* /
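// Illustrative exchange, adding a manual suppression (hypothetical account and
// address):
//
//	> "queuesuppressadd"
//	> exampleaccount
//	> user@example.org
//	< "ok"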
account := ctl.xread()
address := ctl.xread()
_, ok := mox.Conf.Account(account)
if !ok {
	ctl.xcheck(errors.New("unknown account"), "looking up account")
}
addr, err := smtp.ParseAddress(address)
ctl.xcheck(err, "parsing address")
sup := webapi.Suppression{
	Account: account,
	Manual:  true,
	Reason:  "added through mox cli",
}
err = queue.SuppressionAdd(ctx, addr.Path(), &sup)
ctl.xcheck(err, "adding suppression")
ctl.xwriteok()
case "queuesuppressremove":
/* protocol:
> "queuesuppressremove"
> account
> address
< "ok" or error
*/
account := ctl.xread()
address := ctl.xread()
addr, err := smtp.ParseAddress(address)
ctl.xcheck(err, "parsing address")
err = queue.SuppressionRemove(ctx, account, addr.Path())
ctl.xcheck(err, "removing suppression")
ctl.xwriteok()
case "queuesuppresslookup":
/* protocol:
> "queuesuppresslookup"
> account or empty
> address
< "ok" or error
< stream
*/
account := ctl.xread()
address := ctl.xread()
if account != "" {
	_, ok := mox.Conf.Account(account)
	if !ok {
		ctl.xcheck(errors.New("unknown account"), "looking up account")
	}
}
addr, err := smtp.ParseAddress(address)
ctl.xcheck(err, "parsing address")
sup, err := queue.SuppressionLookup(ctx, account, addr.Path())
ctl.xcheck(err, "looking up suppression")
ctl.xwriteok()
xw := ctl.writer()
if sup == nil {
	fmt.Fprintln(xw, "not present")
} else {
	manual := "no"
	if sup.Manual {
		manual = "yes"
	}
	fmt.Fprintf(xw, "present\nadded: %s\nmanual: %s\nbase address: %s\nreason: %q\n", sup.Created.Round(time.Second), manual, sup.BaseAddress, sup.Reason)
}
xw.xclose()
case "importmaildir", "importmbox":
mbox := cmd == "importmbox"
importctl(ctx, ctl, mbox)
case "domainadd":
/* protocol:
> "domainadd"
> domain
> account
> localpart
< "ok" or error
*/
domain := ctl.xread()
account := ctl.xread()
localpart := ctl.xread()
d, err := dns.ParseDomain(domain)
ctl.xcheck(err, "parsing domain")
err = admin.DomainAdd(ctx, d, account, smtp.Localpart(localpart))
ctl.xcheck(err, "adding domain")
ctl.xwriteok()
case "domainrm":
/* protocol:
> "domainrm"
> domain
< "ok" or error
*/
domain := ctl.xread()
d, err := dns.ParseDomain(domain)
ctl.xcheck(err, "parsing domain")
err = admin.DomainRemove(ctx, d)
ctl.xcheck(err, "removing domain")
ctl.xwriteok()
case "accountadd":
/* protocol:
> "accountadd"
> account
> address
< "ok" or error
*/
account := ctl.xread()
address := ctl.xread()
err := admin.AccountAdd(ctx, account, address)
ctl.xcheck(err, "adding account")
ctl.xwriteok()
case "accountrm":
/* protocol:
> "accountrm"
> account
< "ok" or error
*/
account := ctl.xread()
err := admin.AccountRemove(ctx, account)
ctl.xcheck(err, "removing account")
ctl.xwriteok()
case "tlspubkeylist" :
/ * protocol :
> "tlspubkeylist"
> account ( or empty )
< "ok" or error
< stream
* /
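// Output: a "#" header line, then one tab-separated line per key with
// fingerprint, type, quoted name, account, login address and the NoIMAPPreauth
// flag.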
accountOpt := ctl.xread()
tlspubkeys, err := store.TLSPublicKeyList(ctx, accountOpt)
ctl.xcheck(err, "list tls public keys")
ctl.xwriteok()
xw := ctl.writer()
fmt.Fprintf(xw, "# fingerprint, type, name, account, login address, no imap preauth (%d)\n", len(tlspubkeys))
for _, k := range tlspubkeys {
	fmt.Fprintf(xw, "%s\t%s\t%q\t%s\t%s\t%v\n", k.Fingerprint, k.Type, k.Name, k.Account, k.LoginAddress, k.NoIMAPPreauth)
}
xw.xclose()
case "tlspubkeyget":
/* protocol:
> "tlspubkeyget"
> fingerprint
< "ok" or error
< type
< name
< account
< address
< noimappreauth (true/false)
< stream (certder)
*/
fp := ctl.xread()
tlspubkey, err := store.TLSPublicKeyGet(ctx, fp)
ctl . xcheck ( err , "looking tls public key" )
ctl.xwriteok()
ctl.xwrite(tlspubkey.Type)
ctl.xwrite(tlspubkey.Name)
ctl.xwrite(tlspubkey.Account)
ctl.xwrite(tlspubkey.LoginAddress)
ctl.xwrite(fmt.Sprintf("%v", tlspubkey.NoIMAPPreauth))
ctl.xstreamfrom(bytes.NewReader(tlspubkey.CertDER))
case "tlspubkeyadd":
/* protocol:
> "tlspubkeyadd"
> loginaddress
> name (or empty)
> noimappreauth (true/false)
> stream (certder)
< "ok" or error
*/
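// The stream carries the certificate in DER form. It is parsed with
// store.ParseTLSPublicKeyCert; a non-empty name overrides whatever name parsing
// filled in, and the key is registered for the account owning the login
// address.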
loginAddress := ctl.xread()
name := ctl.xread()
noimappreauth := ctl.xread()
if noimappreauth != "true" && noimappreauth != "false" {
	ctl.xcheck(fmt.Errorf("bad value %q", noimappreauth), "parsing noimappreauth")
}
var b bytes.Buffer
ctl.xstreamto(&b)
tlspubkey, err := store.ParseTLSPublicKeyCert(b.Bytes())
ctl.xcheck(err, "parsing certificate")
if name != "" {
	tlspubkey.Name = name
}
acc, _, err := store.OpenEmail(ctl.log, loginAddress)
ctl.xcheck(err, "open account for address")
defer func() {
	err := acc.Close()
	ctl.log.Check(err, "close account")
}()
tlspubkey.Account = acc.Name
tlspubkey.LoginAddress = loginAddress
tlspubkey.NoIMAPPreauth = noimappreauth == "true"
err = store.TLSPublicKeyAdd(ctx, &tlspubkey)
ctl.xcheck(err, "adding tls public key")
ctl.xwriteok()
case "tlspubkeyrm" :
/ * protocol :
> "tlspubkeyadd"
> fingerprint
< "ok" or error
* /
fp := ctl.xread()
err := store.TLSPublicKeyRemove(ctx, fp)
ctl.xcheck(err, "removing tls public key")
ctl.xwriteok()
case "addressadd":
/* protocol:
> "addressadd"
> address
> account
< "ok" or error
*/
address := ctl.xread()
account := ctl.xread()
err := admin.AddressAdd(ctx, address, account)
ctl.xcheck(err, "adding address")
ctl.xwriteok()
case "addressrm":
/* protocol:
> "addressrm"
> address
< "ok" or error
*/
address := ctl.xread()
err := admin.AddressRemove(ctx, address)
ctl.xcheck(err, "removing address")
ctl.xwriteok()
case "aliaslist":
/* protocol:
> "aliaslist"
> domain
< "ok" or error
< stream
*/
domain := ctl.xread()
d, err := dns.ParseDomain(domain)
ctl.xcheck(err, "parsing domain")
dc, ok := mox.Conf.Domain(d)
if !ok {
	ctl.xcheck(errors.New("no such domain"), "listing aliases")
}
ctl.xwriteok()
w := ctl.writer()
for _, a := range dc.Aliases {
	lp, err := smtp.ParseLocalpart(a.LocalpartStr)
	ctl.xcheck(err, "parsing alias localpart")
	fmt.Fprintln(w, smtp.NewAddress(lp, a.Domain).Pack(true))
}
w.xclose()
case "aliasprint" :
/ * protocol :
> "aliasprint"
> address
< "ok" or error
< stream
* /
address := ctl . xread ( )
_ , alias , ok := mox . Conf . AccountDestination ( address )
if ! ok {
ctl . xcheck ( errors . New ( "no such address" ) , "looking up alias" )
} else if alias == nil {
ctl . xcheck ( errors . New ( "address not an alias" ) , "looking up alias" )
}
ctl . xwriteok ( )
w := ctl . writer ( )
fmt . Fprintf ( w , "# postpublic %v\n" , alias . PostPublic )
fmt . Fprintf ( w , "# listmembers %v\n" , alias . ListMembers )
fmt . Fprintf ( w , "# allowmsgfrom %v\n" , alias . AllowMsgFrom )
fmt . Fprintln ( w , "# members:" )
for _ , a := range alias . Addresses {
fmt . Fprintln ( w , a )
}
w . xclose ( )
case "aliasadd" :
/ * protocol :
> "aliasadd"
> address
> json alias
< "ok" or error
* /
address := ctl . xread ( )
line := ctl . xread ( )
addr , err := smtp . ParseAddress ( address )
ctl . xcheck ( err , "parsing address" )
var alias config . Alias
xparseJSON ( ctl , line , & alias )
2024-12-03 00:03:18 +03:00
err = admin . AliasAdd ( ctx , addr , alias )
2024-04-24 20:15:30 +03:00
ctl . xcheck ( err , "adding alias" )
ctl . xwriteok ( )
case "aliasupdate" :
/ * protocol :
> "aliasupdate"
> alias
> "true" or "false" for postpublic
> "true" or "false" for listmembers
> "true" or "false" for allowmsgfrom
< "ok" or error
* /
address := ctl . xread ( )
postpublic := ctl . xread ( )
listmembers := ctl . xread ( )
allowmsgfrom := ctl . xread ( )
addr , err := smtp . ParseAddress ( address )
ctl . xcheck ( err , "parsing address" )
err = admin.DomainSave(ctx, addr.Domain.Name(), func(d *config.Domain) error {
	a, ok := d.Aliases[addr.Localpart.String()]
	if !ok {
		return fmt.Errorf("alias does not exist")
	}
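	// Only the literal values "true" and "false" change a field below; any other
	// value leaves the current setting unchanged.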
	switch postpublic {
	case "false":
		a.PostPublic = false
	case "true":
		a.PostPublic = true
	}
	switch listmembers {
	case "false":
		a.ListMembers = false
	case "true":
		a.ListMembers = true
	}
	switch allowmsgfrom {
	case "false":
		a.AllowMsgFrom = false
	case "true":
		a.AllowMsgFrom = true
	}
	d.Aliases = maps.Clone(d.Aliases)
	d.Aliases[addr.Localpart.String()] = a
	return nil
})
ctl.xcheck(err, "saving alias")
ctl.xwriteok()
case "aliasrm" :
/ * protocol :
> "aliasrm"
> alias
< "ok" or error
* /
address := ctl . xread ( )
addr , err := smtp . ParseAddress ( address )
ctl . xcheck ( err , "parsing address" )
2024-12-03 00:03:18 +03:00
err = admin . AliasRemove ( ctx , addr )
2024-04-24 20:15:30 +03:00
ctl . xcheck ( err , "removing alias" )
ctl . xwriteok ( )
case "aliasaddaddr" :
/ * protocol :
> "aliasaddaddr"
> alias
> addresses as json
< "ok" or error
* /
address := ctl . xread ( )
line := ctl . xread ( )
addr , err := smtp . ParseAddress ( address )
ctl . xcheck ( err , "parsing address" )
var addresses [ ] string
xparseJSON ( ctl , line , & addresses )
2024-12-03 00:03:18 +03:00
err = admin . AliasAddressesAdd ( ctx , addr , addresses )
2024-04-24 20:15:30 +03:00
ctl . xcheck ( err , "adding addresses to alias" )
ctl . xwriteok ( )
case "aliasrmaddr" :
/ * protocol :
> "aliasrmaddr"
> alias
> addresses as json
< "ok" or error
* /
address := ctl . xread ( )
line := ctl . xread ( )
addr , err := smtp . ParseAddress ( address )
ctl . xcheck ( err , "parsing address" )
var addresses [ ] string
xparseJSON ( ctl , line , & addresses )
2024-12-03 00:03:18 +03:00
err = admin . AliasAddressesRemove ( ctx , addr , addresses )
2024-04-24 20:15:30 +03:00
ctl . xcheck ( err , "removing addresses to alias" )
ctl . xwriteok ( )
case "loglevels":
/* protocol:
> "loglevels"
< "ok"
< stream
*/
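// The stream has one "pkg: level" line per configured log level, sorted by
// package name, with the empty package name shown as "(default)".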
ctl.xwriteok()
l := mox.Conf.LogLevels()
keys := []string{}
for k := range l {
	keys = append(keys, k)
}
sort.Slice(keys, func(i, j int) bool {
	return keys[i] < keys[j]
})
s := ""
for _, k := range keys {
	ks := k
	if ks == "" {
		ks = "(default)"
	}
	s += ks + ": " + mlog.LevelStrings[l[k]] + "\n"
}
ctl.xstreamfrom(strings.NewReader(s))
case "setloglevels" :
/ * protocol :
> "setloglevels"
> pkg
2023-02-06 17:17:46 +03:00
> level ( if empty , log level for pkg will be unset )
2023-01-30 16:27:06 +03:00
< "ok" or error
* /
pkg := ctl . xread ( )
levelstr := ctl . xread ( )
2023-02-06 17:17:46 +03:00
if levelstr == "" {
2024-03-18 10:50:42 +03:00
mox . Conf . LogLevelRemove ( log , pkg )
2023-02-06 17:17:46 +03:00
} else {
level , ok := mlog . Levels [ levelstr ]
if ! ok {
ctl . xerror ( "bad level" )
}
2024-03-18 10:50:42 +03:00
mox . Conf . LogLevelSet ( log , pkg , level )
2023-01-30 16:27:06 +03:00
}
ctl . xwriteok ( )
case "retrain" :
/ * protocol :
> "retrain"
> account
< "ok" or error
* /
account := ctl . xread ( )
2024-03-18 10:50:42 +03:00
acc , err := store . OpenAccount ( log , account )
ctl . xcheck ( err , "open account" )
2023-07-26 20:21:58 +03:00
defer func ( ) {
if acc != nil {
err := acc . Close ( )
log . Check ( err , "closing account after retraining" )
}
} ( )
// todo: can we retrain an account without holding a write lock? perhaps by writing a junkfilter to a new location, and staying informed of message changes while we go through all messages in the account?
2023-02-12 01:00:12 +03:00
acc.WithWLock(func() {
conf, _ := acc.Conf()
if conf.JunkFilter == nil {
ctl.xcheck(store.ErrNoJunkFilter, "looking for junk filter")
}
// Remove existing junk filter files.
basePath := mox.DataDirPath("accounts")
dbPath := filepath.Join(basePath, acc.Name, "junkfilter.db")
bloomPath := filepath.Join(basePath, acc.Name, "junkfilter.bloom")
2023-02-16 15:22:00 +03:00
err := os.Remove(dbPath)
2023-12-05 15:35:58 +03:00
log . Check ( err , "removing old junkfilter database file" , slog . String ( "path" , dbPath ) )
2023-02-16 15:22:00 +03:00
err = os.Remove(bloomPath)
2023-12-05 15:35:58 +03:00
log . Check ( err , "removing old junkfilter bloom filter file" , slog . String ( "path" , bloomPath ) )
2023-02-12 01:00:12 +03:00
// Open junk filter; this creates new files.
2024-03-18 10:50:42 +03:00
jf, _, err := acc.OpenJunkFilter(ctx, log)
2023-02-12 01:00:12 +03:00
ctl . xcheck ( err , "open new junk filter" )
defer func ( ) {
if jf == nil {
return
}
2023-02-16 15:22:00 +03:00
err := jf.Close()
log.Check(err, "closing junk filter during cleanup")
2023-02-12 01:00:12 +03:00
}()
// Read through messages with junk or nonjunk flag set, and train them.
var total, trained int
2023-05-22 15:40:36 +03:00
q := bstore.QueryDB[store.Message](ctx, acc.DB)
2023-07-24 22:21:05 +03:00
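// Skip expunged messages: they have been removed and should not influence the junk filter.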
q . FilterEqual ( "Expunged" , false )
2023-02-12 01:00:12 +03:00
err = q.ForEach(func(m store.Message) error {
total++
2024-03-18 10:50:42 +03:00
ok, err := acc.TrainMessage(ctx, log, jf, m)
2023-02-12 01:00:12 +03:00
if ok {
trained++
}
return err
})
ctl.xcheck(err, "training messages")
2024-03-18 10:50:42 +03:00
log . Info ( "retrained messages" , slog . Int ( "total" , total ) , slog . Int ( "trained" , trained ) )
2023-02-12 01:00:12 +03:00
// Close junk filter, marking success.
err = jf.Close()
jf = nil
ctl.xcheck(err, "closing junk filter")
})
add webmail
it was far down on the roadmap, but implemented earlier, because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change. jmap has lots of other requirements, so it's already
a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and webmail are written together, making their interface/api much
smaller and simpler than jmap.
one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes. keeping this
data consistent after any change to the stored messages (through the code base)
is tricky, so mox now has a consistency check that verifies the counts are
correct, which runs only during tests, each time an internal account reference
is closed. we have a few more internal "changes" that are propagated for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.
the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view, default is automatic based on
screen size. the panes can be resized by the user. buttons for actions are just
text, not icons. clicking a button briefly shows the shortcut for the action in
the bottom right, helping with learning to operate quickly. any text that is
underdotted has a title attribute that causes more information to be displayed,
e.g. what a button does or a field is about. to highlight potential phishing
attempts, any text (anywhere in the webclient) that switches unicode "blocks"
(a rough approximation to (language) scripts) within a word is underlined
orange. multiple messages can be selected with familiar ui interaction:
clicking while holding control and/or shift keys. keyboard navigation works
with arrows/page up/down and home/end keys, and also with a few basic vi-like
keys for list/message navigation. we prefer showing the text version of a
message instead of the html version (with inlined images only). html messages
are shown
in an iframe served from an endpoint with CSP headers to prevent dangerous
resources (scripts, external images) from being loaded. the html is also
sanitized, with javascript removed. a user can choose to load external
resources (e.g. images for tracking purposes).
the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend. since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls. the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple: it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, with no
additional runtime code or complicated build process. the webmail is served
from a compressed, cacheable html file that includes the style and the
javascript, currently just over 225kb uncompressed, under 60kb compressed (not
minified, including comments). we include the generated js files in the
repository, to keep the Go binary easily buildable and self-contained.
authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
WebmailHTTP:
Enabled: true
WebmailHTTPS:
Enabled: true
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
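as a rough illustration of the sse-based change propagation described above,
here is a minimal go sketch of the backend side of such a long-lived event
stream. this is not mox's actual webmail code; the handler, event name and
change type are made up for the example:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// change is a simplified, hypothetical stand-in for the change events the
// backend pushes to a connected webmail frontend (new message, updated
// mailbox counts, changed keywords, ...).
type change struct {
	Kind      string `json:"kind"`
	MailboxID int64  `json:"mailboxID"`
}

// serveEvents streams changes from ch to the client as server-sent events
// until the client disconnects.
func serveEvents(w http.ResponseWriter, r *http.Request, ch <-chan change) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	for {
		select {
		case <-r.Context().Done():
			return // client went away
		case c := <-ch:
			buf, err := json.Marshal(c)
			if err != nil {
				return
			}
			fmt.Fprintf(w, "event: change\ndata: %s\n\n", buf)
			flusher.Flush()
		}
	}
}

func main() {
	events := make(chan change, 16)
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		serveEvents(w, r, events)
	})
	// elsewhere, storage changes would be written to the events channel, e.g.:
	events <- change{Kind: "mailboxCounts", MailboxID: 1}
	_ = http.ListenAndServe("localhost:8080", nil)
}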
2023-08-07 22:57:03 +03:00
ctl.xwriteok()
case "recalculatemailboxcounts":
/* protocol:
> "recalculatemailboxcounts"
> account
< "ok" or error
< stream
*/
account := ctl.xread()
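// Recalculate per-mailbox message counts/sizes and the account-wide disk usage from the stored messages, correcting any values that drifted.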
2024-03-18 10:50:42 +03:00
acc, err := store.OpenAccount(log, account)
2023-08-07 22:57:03 +03:00
ctl . xcheck ( err , "open account" )
defer func ( ) {
if acc != nil {
err := acc . Close ( )
log . Check ( err , "closing account after recalculating mailbox counts" )
}
} ( )
ctl . xwriteok ( )
w := ctl . writer ( )
acc . WithWLock ( func ( ) {
var changes [ ] store . Change
err = acc . DB . Write ( ctx , func ( tx * bstore . Tx ) error {
2023-12-20 22:54:12 +03:00
var totalSize int64
err := bstore.QueryTx[store.Mailbox](tx).ForEach(func(mb store.Mailbox) error {
2023-08-07 22:57:03 +03:00
mc, err := mb.CalculateCounts(tx)
if err != nil {
return fmt.Errorf("calculating counts for mailbox %q: %w", mb.Name, err)
}
2023-12-20 22:54:12 +03:00
totalSize += mc.Size
2023-08-07 22:57:03 +03:00
if !mb.HaveCounts || mc != mb.MailboxCounts {
_, err := fmt.Fprintf(w, "for %s setting new counts %s (was %s)\n", mb.Name, mc, mb.MailboxCounts)
ctl.xcheck(err, "write")
mb.HaveCounts = true
mb.MailboxCounts = mc
if err := tx.Update(&mb); err != nil {
return fmt.Errorf("storing new counts for %q: %v", mb.Name, err)
}
changes = append(changes, mb.ChangeCounts())
}
return nil
})
2023-12-20 22:54:12 +03:00
if err != nil {
return err
}
du := store.DiskUsage{ID: 1}
if err := tx.Get(&du); err != nil {
return fmt.Errorf("get disk usage: %v", err)
}
if du.MessageSize != totalSize {
_, err := fmt.Fprintf(w, "setting new total message size %d (was %d)\n", totalSize, du.MessageSize)
ctl.xcheck(err, "write")
du.MessageSize = totalSize
if err := tx.Update(&du); err != nil {
return fmt.Errorf("update disk usage: %v", err)
}
}
return nil
2023-08-07 22:57:03 +03:00
})
ctl.xcheck(err, "write transaction for mailbox counts")
2023-02-12 01:00:12 +03:00
2023-08-07 22:57:03 +03:00
store.BroadcastChanges(acc, changes)
})
w.xclose()
2023-08-08 23:10:53 +03:00
case "fixmsgsize" :
/ * protocol :
> "fixmsgsize"
> account or empty
< "ok" or error
< stream
* /
accountOpt := ctl . xread ( )
ctl . xwriteok ( )
w := ctl . writer ( )
var foundProblem bool
const batchSize = 10000
xfixmsgsize := func ( accName string ) {
2024-03-18 10:50:42 +03:00
acc, err := store.OpenAccount(log, accName)
2023-08-08 23:10:53 +03:00
ctl . xcheck ( err , "open account" )
defer func ( ) {
err := acc . Close ( )
log . Check ( err , "closing account after fixing message sizes" )
} ( )
total := 0
var lastID int64
for {
var n int
acc . WithRLock ( func ( ) {
mailboxCounts := map [ int64 ] store . Mailbox { } // For broadcasting.
// Don't process all message in one transaction, we could block the account for too long.
err := acc . DB . Write ( ctx , func ( tx * bstore . Tx ) error {
q := bstore . QueryTx [ store . Message ] ( tx )
q . FilterEqual ( "Expunged" , false )
q . FilterGreater ( "ID" , lastID )
q . Limit ( batchSize )
q . SortAsc ( "ID" )
return q . ForEach ( func ( m store . Message ) error {
lastID = m . ID
2023-10-05 23:59:53 +03:00
n++
2023-08-08 23:10:53 +03:00
p := acc.MessagePath(m.ID)
st, err := os.Stat(p)
if err != nil {
mb := store.Mailbox{ID: m.MailboxID}
if xerr := tx.Get(&mb); xerr != nil {
_, werr := fmt.Fprintf(w, "get mailbox id %d for message with file error: %v\n", mb.ID, xerr)
ctl.xcheck(werr, "write")
}
_, werr := fmt.Fprintf(w, "checking file %s for message %d in mailbox %q (id %d): %v (continuing)\n", p, m.ID, mb.Name, mb.ID, err)
ctl.xcheck(werr, "write")
return nil
}
filesize := st . Size ( )
correctSize := int64 ( len ( m . MsgPrefix ) ) + filesize
if m . Size == correctSize {
return nil
}
foundProblem = true
mb := store . Mailbox { ID : m . MailboxID }
if err := tx . Get ( & mb ) ; err != nil {
_ , werr := fmt . Fprintf ( w , "get mailbox id %d for message with file size mismatch: %v\n" , mb . ID , err )
ctl . xcheck ( werr , "write" )
}
_ , err = fmt . Fprintf ( w , "fixing message %d in mailbox %q (id %d) with incorrect size %d, should be %d (len msg prefix %d + on-disk file %s size %d)\n" , m . ID , mb . Name , mb . ID , m . Size , correctSize , len ( m . MsgPrefix ) , p , filesize )
ctl . xcheck ( err , "write" )
// We assume that the original message size was accounted as stored in the mailbox
// total size. If this isn't correct, the user can always run
// recalculatemailboxcounts.
mb . Size -= m . Size
mb . Size += correctSize
if err := tx . Update ( & mb ) ; err != nil {
return fmt . Errorf ( "update mailbox counts: %v" , err )
}
mailboxCounts [ mb . ID ] = mb
m . Size = correctSize
mr := acc . MessageReader ( m )
2023-12-05 15:35:58 +03:00
						part, err := message.EnsurePart(log.Logger, false, mr, m.Size)
2023-08-08 23:10:53 +03:00
						if err != nil {
							_, werr := fmt.Fprintf(w, "parsing message %d again: %v (continuing)\n", m.ID, err)
							ctl.xcheck(werr, "write")
						}
						m.ParsedBuf, err = json.Marshal(part)
						if err != nil {
							return fmt.Errorf("marshal parsed message: %v", err)
						}
						total++
						if err := tx.Update(&m); err != nil {
							return fmt.Errorf("update message: %v", err)
						}
						return nil
					})
				})
				ctl.xcheck(err, "find and fix wrong message sizes")
				var changes []store.Change
				for _, mb := range mailboxCounts {
					changes = append(changes, mb.ChangeCounts())
				}
				store.BroadcastChanges(acc, changes)
			})
			if n < batchSize {
				break
			}
		}
		_, err = fmt.Fprintf(w, "%d message size(s) fixed for account %s\n", total, accName)
		ctl.xcheck(err, "write")
	}
	if accountOpt != "" {
		xfixmsgsize(accountOpt)
	} else {
		for i, accName := range mox.Conf.Accounts() {
			var line string
			if i > 0 {
				line = "\n"
			}
			_, err := fmt.Fprintf(w, "%sFixing message sizes in account %s...\n", line, accName)
			ctl.xcheck(err, "write")
			xfixmsgsize(accName)
		}
	}
	if foundProblem {
		_, err := fmt.Fprintf(w, "\nProblems were found and fixed. You should invalidate messages stored at imap clients with the \"mox bumpuidvalidity account [mailbox]\" command.\n")
		ctl.xcheck(err, "write")
	}
	w.xclose()
2023-08-07 22:57:03 +03:00
case "reparse":
	/* protocol:
	> "reparse"
	> account or empty
	< "ok" or error
	< stream
	*/
	accountOpt := ctl.xread()
improve training of junk filter
before, we used heuristics to decide when to train/untrain a message as junk or
nonjunk: the message had to be seen and be in certain mailboxes. then if a
message was marked as junk, it was junk, and otherwise it was nonjunk. this
wasn't good enough: you may want to keep some messages around as neither junk
nor nonjunk, and that wasn't possible.
ideally, we would just look at the imap $Junk and $NotJunk flags. the problem
is that mail clients don't set these flags, or don't make it easy. thunderbird
can set the flags based on its own bayesian filter. it has a shortcut for
marking Junk and moving it to the junk folder (good), but the counterpart of
notjunk only marks a message as notjunk without showing in the UI that it was
marked as notjunk. there is also no "move and mark as notjunk" mechanism. e.g.
"archive" does not mark a message as notjunk. ios mail and mutt don't appear to
have any way to see or change the $Junk and $NotJunk flags.
what email clients do have is the ability to move messages to other
mailboxes/folders. so mox now has a mechanism that allows you to configure
mailboxes that automatically set $Junk or $NotJunk (or clear both) when a
message is moved/copied/delivered to that folder. e.g. a mailbox called junk or
spam or rejects marks its messages as junk. inbox, postmaster, dmarc, tlsrpt,
neutral* mark their messages as neither junk nor notjunk. other folders mark
their messages as notjunk, e.g. list/*, archive. this functionality is
optional, but enabled with the quickstart and for new accounts.
also, mox now keeps track of the previous training of a message and will only
untrain/train if needed. before, there probably have been duplicate or missing
(un)trainings.
this also includes a new subcommand "retrain" to recreate the junkfilter for an
account. you should run it after updating to this version. and you should
probably also modify your account config to include the AutomaticJunkFlags.
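for illustration, the per-account settings could look roughly like this (a
sketch; field names and regexps here are indicative, the annotated example
config shows the exact form):
AutomaticJunkFlags:
	Enabled: true
	JunkMailboxRegexp: ^(junk|spam)
	NeutralMailboxRegexp: ^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects)
	NotJunkMailboxRegexp: .*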
2023-02-12 01:00:12 +03:00
	ctl.xwriteok()
2023-08-07 22:57:03 +03:00
	w := ctl.writer()
2023-08-08 23:10:53 +03:00
	const batchSize = 100
2023-08-07 22:57:03 +03:00
	xreparseAccount := func(accName string) {
2024-03-18 10:50:42 +03:00
		acc, err := store.OpenAccount(log, accName)
2023-08-07 22:57:03 +03:00
		ctl.xcheck(err, "open account")
		defer func() {
			err := acc.Close()
			log.Check(err, "closing account after reparsing messages")
		}()
		total := 0
		var lastID int64
		for {
			var n int
2023-08-08 23:10:53 +03:00
			// Don't process all messages in one transaction, we could block the account for too long.
2023-08-07 22:57:03 +03:00
			err := acc.DB.Write(ctx, func(tx *bstore.Tx) error {
				q := bstore.QueryTx[store.Message](tx)
				q.FilterEqual("Expunged", false)
				q.FilterGreater("ID", lastID)
2023-08-08 23:10:53 +03:00
				q.Limit(batchSize)
2023-08-07 22:57:03 +03:00
				q.SortAsc("ID")
				return q.ForEach(func(m store.Message) error {
					lastID = m.ID
					mr := acc.MessageReader(m)
2023-12-05 15:35:58 +03:00
					p, err := message.EnsurePart(log.Logger, false, mr, m.Size)
2023-08-07 22:57:03 +03:00
					if err != nil {
						_, err := fmt.Fprintf(w, "parsing message %d: %v (continuing)\n", m.ID, err)
						ctl.xcheck(err, "write")
					}
					m.ParsedBuf, err = json.Marshal(p)
					if err != nil {
						return fmt.Errorf("marshal parsed message: %v", err)
					}
					total++
					n++
					if err := tx.Update(&m); err != nil {
						return fmt.Errorf("update message: %v", err)
					}
					return nil
				})
			})
			ctl.xcheck(err, "update messages with parsed mime structure")
2023-08-08 23:10:53 +03:00
			if n < batchSize {
2023-08-07 22:57:03 +03:00
				break
			}
		}
2023-08-08 23:10:53 +03:00
		_, err = fmt.Fprintf(w, "%d message(s) reparsed for account %s\n", total, accName)
2023-08-07 22:57:03 +03:00
		ctl.xcheck(err, "write")
	}
	if accountOpt != "" {
		xreparseAccount(accountOpt)
	} else {
		for i, accName := range mox.Conf.Accounts() {
			var line string
			if i > 0 {
				line = "\n"
			}
2023-08-08 23:10:53 +03:00
			_, err := fmt.Fprintf(w, "%sReparsing account %s...\n", line, accName)
2023-08-07 22:57:03 +03:00
ctl . xcheck ( err , "write" )
xreparseAccount ( accName )
}
}
w . xclose ( )
improve training of junk filter
before, we used heuristics to decide when to train/untrain a message as junk or
nonjunk: the message had to be seen and be in certain mailboxes. then, if a
message was marked as junk, it was junk, and otherwise it was nonjunk. this
wasn't good enough: you may want to keep some messages around as neither junk
nor nonjunk, and that wasn't possible.
ideally, we would just look at the imap $Junk and $NotJunk flags. the problem
is that mail clients don't set these flags, or don't make it easy. thunderbird
can set the flags based on its own bayesian filter. it has a shortcut for
marking a message as junk and moving it to the junk folder (good), but the
notjunk counterpart only marks a message as notjunk, without showing in the UI
that it was marked. there is also no "move and mark as notjunk" mechanism; e.g.
"archive" does not mark a message as notjunk. ios mail and mutt don't appear to
have any way to see or change the $Junk and $NotJunk flags.
what email clients do have is the ability to move messages to other
mailboxes/folders. so mox now has a mechanism that allows you to configure
mailboxes that automatically set $Junk or $NotJunk (or clear both) when a
message is moved/copied/delivered to that folder. e.g. a mailbox called junk or
spam or rejects marks its messages as junk. inbox, postmaster, dmarc, tlsrpt and
neutral* mark their messages as neither junk nor notjunk. other folders, e.g.
list/* and archive, mark their messages as notjunk. this functionality is
optional, but is enabled by the quickstart and for new accounts.
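a hypothetical sketch (type and function names made up, not the actual mox code) of how mailbox names can be mapped to training decisions with a couple of regular expressions; the real configuration lives in the account config under AutomaticJunkFlags:

// hypothetical sketch: map a mailbox name to a training decision using
// configurable regular expressions.
package main

import (
	"fmt"
	"regexp"
)

type autoJunkFlags struct {
	junkRe    *regexp.Regexp // mailboxes whose messages train as junk
	neutralRe *regexp.Regexp // mailboxes whose messages are not used for training
}

// flagsForMailbox returns whether a message delivered/moved/copied into the
// mailbox should get the $Junk or $NotJunk flag.
func (a autoJunkFlags) flagsForMailbox(name string) (junk, notJunk bool) {
	switch {
	case a.junkRe.MatchString(name):
		return true, false // e.g. Junk, Spam, Rejects
	case a.neutralRe.MatchString(name):
		return false, false // neither flag, e.g. Inbox, postmaster
	default:
		return false, true // everything else trains as nonjunk, e.g. Archive, list/*
	}
}

func main() {
	cfg := autoJunkFlags{
		junkRe:    regexp.MustCompile(`(?i)^(junk|spam|rejects)`),
		neutralRe: regexp.MustCompile(`(?i)^(inbox|postmaster|dmarc|tlsrpt|neutral)`),
	}
	for _, mb := range []string{"Junk", "Inbox", "Archive", "list/golang"} {
		junk, notJunk := cfg.flagsForMailbox(mb)
		fmt.Printf("%s: junk=%v notjunk=%v\n", mb, junk, notJunk)
	}
}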
also, mox now keeps track of the previous training of a message and will only
untrain/train if needed. before, there probably were duplicate or missing
(un)trainings.
this also includes a new subcommand "retrain" to recreate the junkfilter for an
account. you should run it after updating to this version. and you should
probably also modify your account config to include the AutomaticJunkFlags.
2023-02-12 01:00:12 +03:00
implement message threading in backend and webmail
we match messages to their parents based on the "references" and "in-reply-to"
headers (requiring the same base subject), and in the absence of those headers
we also match by base subject alone (against messages received at most 4 weeks ago).
we store a threadid with messages. all messages in a thread have the same
threadid. messages also have "thread parent ids", which holds the ids of all
parent messages up to the thread root. then there is "thread missing link",
which is set when a referenced immediate parent wasn't found (but possibly
earlier ancestors can still be found and will be in the thread parent ids).
threads can be muted: newly delivered messages are automatically marked as
read/seen. threads can be marked as collapsed: if set, the webmail collapses
the thread to a single item in the basic threading view (default is to expand
threads). the muted and collapsed fields are copied from their parent on
message delivery.
the threading is implemented in the webmail. the non-threading mode still works
as before. the new default threading mode "unread" automatically expands only
the threads with at least one unread (not seen) message. the basic threading
mode "on" expands all threads except when explicitly collapsed (as saved in the
thread collapsed field). new shortcuts for navigating/interacting with threads have
been added, e.g. go to previous/next thread root, toggle collapse/expand of
thread (or double click), toggle mute of thread. some previous shortcuts have
changed, see the help for details.
message threading is added with an explicit account upgrade step, automatically
started when an account is opened. the upgrade runs in the background because
blocking account operations for its duration would take too long for large
mailboxes. the upgrade takes two steps: 1. updating all message records in the
database to add a normalized message-id and thread base subject (with "re:",
"fwd:" and several other schemes stripped). 2. going through all messages in
the database again, reading the "references" and "in-reply-to" headers from
disk, and matching against their parents. this second step is also done at the
end of each import of mbox/maildir mailboxes. new deliveries are matched
immediately against other existing messages; currently no attempt is made to
rematch previously delivered messages (which could be useful for related
messages being delivered out of order).
the threading is not yet exposed over imap.
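as a hypothetical illustration of step 1 (the helper below is made up; the real normalization handles more prefix schemes and languages), stripping reply/forward prefixes to get a thread base subject could look like this in Go:

// illustrative sketch only: derive a "thread base subject" by stripping
// reply/forward prefixes and collapsing whitespace, so "Re: Fwd: hello" and
// "hello" end up as thread candidates for each other.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var prefixRe = regexp.MustCompile(`(?i)^\s*(re|fwd?|fw)\s*(\[\d+\])?\s*:\s*`)

// baseSubject strips any number of leading "re:"/"fwd:"-style prefixes and
// normalizes whitespace and case.
func baseSubject(s string) string {
	for {
		stripped := prefixRe.ReplaceAllString(s, "")
		if stripped == s {
			break
		}
		s = stripped
	}
	return strings.ToLower(strings.Join(strings.Fields(s), " "))
}

func main() {
	fmt.Println(baseSubject("Re: Fwd: Weekly  report")) // "weekly report"
	fmt.Println(baseSubject("Weekly report"))           // "weekly report"
}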
2023-09-13 09:51:50 +03:00
case "reassignthreads" :
/ * protocol :
> "reassignthreads"
> account or empty
< "ok" or error
< stream
* /
accountOpt := ctl . xread ( )
ctl . xwriteok ( )
w := ctl . writer ( )
xreassignThreads := func ( accName string ) {
2024-03-18 10:50:42 +03:00
		acc, err := store.OpenAccount(log, accName)
2023-09-13 09:51:50 +03:00
ctl . xcheck ( err , "open account" )
defer func ( ) {
err := acc . Close ( )
log . Check ( err , "closing account after reassigning threads" )
} ( )
// We don't want to step on an existing upgrade process.
2024-03-18 10:50:42 +03:00
		err = acc.ThreadingWait(log)
2023-09-13 09:51:50 +03:00
ctl . xcheck ( err , "waiting for threading upgrade to finish" )
// todo: should we try to continue if the threading upgrade failed? only if there is a chance it will succeed this time...
// todo: reassigning isn't atomic (in a single transaction), ideally it would be (bstore would need to be able to handle large updates).
const batchSize = 50000
2024-03-18 10:50:42 +03:00
		total, err := acc.ResetThreading(ctx, log, batchSize, true)
2023-09-13 09:51:50 +03:00
ctl . xcheck ( err , "resetting threading fields" )
_ , err = fmt . Fprintf ( w , "New thread base subject assigned to %d message(s), starting to reassign threads...\n" , total )
ctl . xcheck ( err , "write" )
// Assign threads again. Ideally we would do this in a single transaction, but
// bstore/boltdb cannot handle so many pending changes, so we set a high batchsize.
2024-03-18 10:50:42 +03:00
		err = acc.AssignThreads(ctx, log, nil, 0, 50000, w)
2023-09-13 09:51:50 +03:00
ctl . xcheck ( err , "reassign threads" )
_ , err = fmt . Fprintf ( w , "Threads reassigned. You should invalidate messages stored at imap clients with the \"mox bumpuidvalidity account [mailbox]\" command.\n" )
ctl . xcheck ( err , "write" )
}
if accountOpt != "" {
xreassignThreads ( accountOpt )
} else {
for i , accName := range mox . Conf . Accounts ( ) {
var line string
if i > 0 {
line = "\n"
}
_ , err := fmt . Fprintf ( w , "%sReassigning threads for account %s...\n" , line , accName )
ctl . xcheck ( err , "write" )
xreassignThreads ( accName )
}
}
w . xclose ( )
add a "backup" subcommand to make consistent backups, and a "verifydata" subcommand to verify a backup before restoring, and add tests for future upgrades
the backup command will make consistent snapshots of all the database files. i
had been copying the db files before, and that usually works. but if the file is
modified during the backup, the copy is inconsistent and is likely to generate
errors when reading (possibly at any moment in the future, when some db page is read).
"mox backup" opens the database file and writes out a copy in a transaction.
it also duplicates the message files.
before doing a restore, you could run "mox verifydata" on the to-be-restored
"data" directory. it check the database files, and compares the message files
with the database.
the new "gentestdata" subcommand generates a basic "data" directory, with a
queue and a few accounts. we will use it in the future along with "verifydata"
to test upgrades from an old version to the latest version, both when going to
the next version and when skipping several versions. the script test-upgrades.sh
executes these tests; it doesn't do anything at the moment, because no release
has this subcommand yet.
inspired by a failed upgrade attempt of a pre-release version.
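the essence of a consistent database snapshot can be sketched with the bbolt api: open the database file and stream a copy from within a read-only transaction. this is only an illustration of the idea with placeholder paths; the actual backup code in mox (using bstore on top of bbolt, and also copying message files) is more involved:

// generic bbolt example, not mox's backupctl: copying the db inside db.View
// yields a snapshot that is consistent even while other transactions write.
package main

import (
	"log"
	"os"

	bolt "go.etcd.io/bbolt"
)

func backupDB(srcPath, dstPath string) error {
	db, err := bolt.Open(srcPath, 0600, nil)
	if err != nil {
		return err
	}
	defer db.Close()

	f, err := os.Create(dstPath)
	if err != nil {
		return err
	}
	defer f.Close()

	// View starts a read-only transaction; WriteTo streams a consistent copy
	// of the database as it exists at the start of that transaction.
	return db.View(func(tx *bolt.Tx) error {
		_, err := tx.WriteTo(f)
		return err
	})
}

func main() {
	// example paths, not mox's real data layout.
	if err := backupDB("data/index.db", "backup/index.db"); err != nil {
		log.Fatal(err)
	}
}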
2023-05-26 20:26:51 +03:00
case "backup" :
backupctl ( ctx , ctl )
2023-01-30 16:27:06 +03:00
default:
2023-12-05 15:35:58 +03:00
log . Info ( "unrecognized command" , slog . String ( "cmd" , cmd ) )
2023-01-30 16:27:06 +03:00
ctl . xwrite ( "unrecognized command" )
return
}
}