diff --git a/README.md b/README.md index 803172f..ef955ae 100644 --- a/README.md +++ b/README.md @@ -33,6 +33,8 @@ See Quickstart below to get started. support is limited). - Webserver with serving static files and forwarding requests (reverse proxy), so port 443 can also be used to serve websites. +- Simple HTTP/JSON API for sending transactional email and receiving delivery + events and incoming messages (webapi and webhooks). - Prometheus metrics and structured logging for operational insight. - "mox localserve" subcommand for running mox locally for email-related testing/developing, including pedantic mode. @@ -133,12 +135,13 @@ https://nlnet.nl/project/Mox/. ## Roadmap +- Aliases, for delivering to multiple local accounts. - Webmail improvements -- HTTP-based API for sending messages and receiving delivery feedback - Calendaring with CalDAV/iCal - More IMAP extensions (PREVIEW, WITHIN, IMPORTANT, COMPRESS=DEFLATE, CREATE-SPECIAL-USE, SAVEDATE, UNAUTHENTICATE, REPLACE, QUOTA, NOTIFY, MULTIAPPEND, OBJECTID, MULTISEARCH, THREAD, SORT) +- SMTP DSN extension - ARC, with forwarded email from trusted source - Forwarding (to an external address) - Add special IMAP mailbox ("Queue?") that contains queued but @@ -447,6 +450,23 @@ messages, for example by replacing your Message-Id header and thereby invalidating your DKIM-signatures, or rejecting messages with more than one DKIM-signature. +## Can I use mox to send transactional email? + +Yes. While you can use SMTP submission to send messages you've composed +yourself, and monitor a mailbox for DSNs, a more convenient option is to use +the mox HTTP/JSON-based webapi and webhooks. + +The mox webapi can be used to send outgoing messages that mox composes. The +webapi can also be used to manage messages stored in an account: changing +message flags, retrieving messages in parsed form or individual parts of +multipart messages, moving messages to another mailbox, or deleting messages. 
+ +Mox webhooks can be used to receive updates about incoming and outgoing +deliveries. Mox can automatically manage per account suppression lists. + +See https://www.xmox.nl/features/#hdr-webapi-and-webhooks for details. + ## Can I use existing TLS certificates/keys? Yes. The quickstart command creates a config that uses ACME with Let's Encrypt, diff --git a/apidiff/next.txt b/apidiff/next.txt index 43f41e3..f2633af 100644 --- a/apidiff/next.txt +++ b/apidiff/next.txt @@ -13,6 +13,7 @@ Below are the incompatible changes between v0.0.10 and next, per package. # iprev # message +- (*Composer).TextPart: changed from func(string) ([]byte, string, string) to func(string, string) ([]byte, string, string) - From: changed from func(*log/slog.Logger, bool, io.ReaderAt) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error) to func(*log/slog.Logger, bool, io.ReaderAt, *Part) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error) - NewComposer: changed from func(io.Writer, int64) *Composer to func(io.Writer, int64, bool) *Composer diff --git a/apidiff/packages.txt b/apidiff/packages.txt index d21fddd..a9ac1f5 100644 --- a/apidiff/packages.txt +++ b/apidiff/packages.txt @@ -16,3 +16,5 @@ spf subjectpass tlsrpt updates +webapi +webhook diff --git a/config/config.go b/config/config.go index 15a7226..5525692 100644 --- a/config/config.go +++ b/config/config.go @@ -182,6 +182,8 @@ type Listener struct { AdminHTTPS WebService `sconf:"optional" sconf-doc:"Admin web interface listener like AdminHTTP, but for HTTPS. Requires a TLS config."` WebmailHTTP WebService `sconf:"optional" sconf-doc:"Webmail client, for reading email. Default path is /webmail/."` WebmailHTTPS WebService `sconf:"optional" sconf-doc:"Webmail client, like WebmailHTTP, but for HTTPS. 
Requires a TLS config."` + WebAPIHTTP WebService `sconf:"optional" sconf-doc:"Like WebAPIHTTPS, but with plain HTTP, without TLS."` + WebAPIHTTPS WebService `sconf:"optional" sconf-doc:"WebAPI, a simple HTTP/JSON-based API for email, with HTTPS (requires a TLS config). Default path is /webapi/."` MetricsHTTP struct { Enabled bool Port int `sconf:"optional" sconf-doc:"Default 8010."` @@ -210,7 +212,7 @@ type Listener struct { } `sconf:"optional" sconf-doc:"All configured WebHandlers will serve on an enabled listener. Either ACME must be configured, or for each WebHandler domain a TLS certificate must be configured."` } -// WebService is an internal web interface: webmail, account, admin. +// WebService is an internal web interface: webmail, webaccount, webadmin, webapi. type WebService struct { Enabled bool Port int `sconf:"optional" sconf-doc:"Default 80 for HTTP and 443 for HTTPS."` @@ -356,6 +358,19 @@ type Route struct { // todo: move RejectsMailbox to store.Mailbox.SpecialUse, possibly with "X" prefix? +// note: outgoing hook events are in ../queue/hooks.go, ../mox-/config.go, ../queue.go and ../webapi/gendoc.sh. keep in sync. + +type OutgoingWebhook struct { + URL string `sconf-doc:"URL to POST webhooks."` + Authorization string `sconf:"optional" sconf-doc:"If not empty, value of Authorization header to add to HTTP requests."` + Events []string `sconf:"optional" sconf-doc:"Events to send outgoing delivery notifications for. If absent, all events are sent. Valid values: delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized."` +} + +type IncomingWebhook struct { + URL string `sconf-doc:"URL to POST webhooks to for incoming deliveries over SMTP."` + Authorization string `sconf:"optional" sconf-doc:"If not empty, value of Authorization header to add to HTTP requests."` +} + type SubjectPass struct { Period time.Duration `sconf-doc:"How long unique values are accepted after generating, e.g. 
12h."` // todo: have a reasonable default for this? } @@ -368,6 +383,12 @@ type AutomaticJunkFlags struct { } type Account struct { + OutgoingWebhook *OutgoingWebhook `sconf:"optional" sconf-doc:"Webhooks for events about outgoing deliveries."` + IncomingWebhook *IncomingWebhook `sconf:"optional" sconf-doc:"Webhooks for events about incoming deliveries over SMTP."` + FromIDLoginAddresses []string `sconf:"optional" sconf-doc:"Login addresses that cause outgoing email to be sent with SMTP MAIL FROM addresses with a unique id after the localpart catchall separator (which must be enabled when addresses are specified here). Any delivery status notifications (DSN, e.g. for bounces), can be related to the original message and recipient with unique id's. You can login to an account with any valid email address, including variants with the localpart catchall separator. You can use this mechanism to send outgoing messages both with and without unique fromid for a given address."` + KeepRetiredMessagePeriod time.Duration `sconf:"optional" sconf-doc:"Period to keep messages retired from the queue (delivered or failed) around. Keeping retired messages is useful for maintaining the suppression list for transactional email, for matching incoming DSNs to sent messages, and for debugging. The time at which to clean up (remove) is calculated at retire time. E.g. 168h (1 week)."` + KeepRetiredWebhookPeriod time.Duration `sconf:"optional" sconf-doc:"Period to keep webhooks retired from the queue (delivered or failed) around. Useful for debugging. The time at which to clean up (remove) is calculated at retire time. E.g. 168h (1 week)."` + Domain string `sconf-doc:"Default domain for account. Deprecated behaviour: If a destination is not a full address but only a localpart, this domain is added to form a full address."` Description string `sconf:"optional" sconf-doc:"Free form description, e.g. 
full name or alternative contact info."` FullName string `sconf:"optional" sconf-doc:"Full name, to use in message From header when composing messages in webmail. Can be overridden per destination."` @@ -383,10 +404,11 @@ type Account struct { NoFirstTimeSenderDelay bool `sconf:"optional" sconf-doc:"Do not apply a delay to SMTP connections before accepting an incoming message from a first-time sender. Can be useful for accounts that send automated responses and want instant replies."` Routes []Route `sconf:"optional" sconf-doc:"Routes for delivering outgoing messages through the queue. Each delivery attempt evaluates these account routes, domain routes and finally global routes. The transport of the first matching route is used in the delivery attempt. If no routes match, which is the default with no configured routes, messages are delivered directly from the queue."` - DNSDomain dns.Domain `sconf:"-"` // Parsed form of Domain. - JunkMailbox *regexp.Regexp `sconf:"-" json:"-"` - NeutralMailbox *regexp.Regexp `sconf:"-" json:"-"` - NotJunkMailbox *regexp.Regexp `sconf:"-" json:"-"` + DNSDomain dns.Domain `sconf:"-"` // Parsed form of Domain. + JunkMailbox *regexp.Regexp `sconf:"-" json:"-"` + NeutralMailbox *regexp.Regexp `sconf:"-" json:"-"` + NotJunkMailbox *regexp.Regexp `sconf:"-" json:"-"` + ParsedFromIDLoginAddresses []smtp.Address `sconf:"-" json:"-"` } type JunkFilter struct { diff --git a/config/doc.go b/config/doc.go index 61462a0..d3dfb75 100644 --- a/config/doc.go +++ b/config/doc.go @@ -386,6 +386,35 @@ See https://pkg.go.dev/github.com/mjl-/sconf for details. # limiting and for the "secure" status of cookies. (optional) Forwarded: false + # Like WebAPIHTTPS, but with plain HTTP, without TLS. (optional) + WebAPIHTTP: + Enabled: false + + # Default 80 for HTTP and 443 for HTTPS. (optional) + Port: 0 + + # Path to serve requests on. 
(optional) + Path: + + # If set, X-Forwarded-* headers are used for the remote IP address for rate + # limiting and for the "secure" status of cookies. (optional) + Forwarded: false + + # WebAPI, a simple HTTP/JSON-based API for email, with HTTPS (requires a TLS + # config). Default path is /webapi/. (optional) + WebAPIHTTPS: + Enabled: false + + # Default 80 for HTTP and 443 for HTTPS. (optional) + Port: 0 + + # Path to serve requests on. (optional) + Path: + + # If set, X-Forwarded-* headers are used for the remote IP address for rate + # limiting and for the "secure" status of cookies. (optional) + Forwarded: false + # Serve prometheus metrics, for monitoring. You should not enable this on a public # IP. (optional) MetricsHTTP: @@ -855,6 +884,53 @@ See https://pkg.go.dev/github.com/mjl-/sconf for details. Accounts: x: + # Webhooks for events about outgoing deliveries. (optional) + OutgoingWebhook: + + # URL to POST webhooks. + URL: + + # If not empty, value of Authorization header to add to HTTP requests. (optional) + Authorization: + + # Events to send outgoing delivery notifications for. If absent, all events are + # sent. Valid values: delivered, suppressed, delayed, failed, relayed, expanded, + # canceled, unrecognized. (optional) + Events: + - + + # Webhooks for events about incoming deliveries over SMTP. (optional) + IncomingWebhook: + + # URL to POST webhooks to for incoming deliveries over SMTP. + URL: + + # If not empty, value of Authorization header to add to HTTP requests. (optional) + Authorization: + + # Login addresses that cause outgoing email to be sent with SMTP MAIL FROM + # addresses with a unique id after the localpart catchall separator (which must be + # enabled when addresses are specified here). Any delivery status notifications + # (DSN, e.g. for bounces), can be related to the original message and recipient + # with unique id's. 
You can login to an account with any valid email address, + # including variants with the localpart catchall separator. You can use this + # mechanism to send outgoing messages both with and without unique fromid for + # a given address. (optional) + FromIDLoginAddresses: + - + + # Period to keep messages retired from the queue (delivered or failed) around. + # Keeping retired messages is useful for maintaining the suppression list for + # transactional email, for matching incoming DSNs to sent messages, and for + # debugging. The time at which to clean up (remove) is calculated at retire time. + # E.g. 168h (1 week). (optional) + KeepRetiredMessagePeriod: 0s + + # Period to keep webhooks retired from the queue (delivered or failed) around. + # Useful for debugging. The time at which to clean up (remove) is calculated at + # retire time. E.g. 168h (1 week). (optional) + KeepRetiredWebhookPeriod: 0s + # Default domain for account. Deprecated behaviour: If a destination is not a full # address but only a localpart, this domain is added to form a full address. Domain: @@ -1233,8 +1309,8 @@ See https://pkg.go.dev/github.com/mjl-/sconf for details. # Examples Mox includes configuration files to illustrate common setups. You can see these -examples with "mox example", and print a specific example with "mox example -". Below are all examples included in mox. +examples with "mox config example", and print a specific example with "mox +config example ". Below are all examples included in mox. # Example webhandlers diff --git a/ctl.go b/ctl.go index 15e50a5..ecb8731 100644 --- a/ctl.go +++ b/ctl.go @@ -4,6 +4,7 @@ import ( "bufio" "context" "encoding/json" + "errors" "fmt" "io" "log" @@ -27,6 +28,7 @@ import ( "github.com/mjl-/mox/queue" "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webapi" ) // ctl represents a connection to the ctl unix domain socket of a running mox instance. 
@@ -294,12 +296,11 @@ func servectl(ctx context.Context, log mlog.Log, conn net.Conn, shutdown func()) } } -func xparseFilters(ctl *ctl, s string) (f queue.Filter) { +func xparseJSON(ctl *ctl, s string, v any) { dec := json.NewDecoder(strings.NewReader(s)) dec.DisallowUnknownFields() - err := dec.Decode(&f) - ctl.xcheck(err, "parsing filters") - return f + err := dec.Decode(v) + ctl.xcheck(err, "parsing from ctl as json") } func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { @@ -447,14 +448,17 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { case "queuelist": /* protocol: - > "queue" - > queuefilters as json + > "queuelist" + > filters as json + > sort as json < "ok" < stream */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) - qmsgs, err := queue.List(ctx, f) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) + var s queue.Sort + xparseJSON(ctl, ctl.xread(), &s) + qmsgs, err := queue.List(ctx, f, s) ctl.xcheck(err, "listing queue") ctl.xwriteok() @@ -465,7 +469,7 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { if qm.LastAttempt != nil { lastAttempt = time.Since(*qm.LastAttempt).Round(time.Second).String() } - fmt.Fprintf(xw, "%5d %s from:%s to:%s next %s last %s error %q\n", qm.ID, qm.Queued.Format(time.RFC3339), qm.Sender().LogString(), qm.Recipient().LogString(), -time.Since(qm.NextAttempt).Round(time.Second), lastAttempt, qm.LastError) + fmt.Fprintf(xw, "%5d %s from:%s to:%s next %s last %s error %q\n", qm.ID, qm.Queued.Format(time.RFC3339), qm.Sender().LogString(), qm.Recipient().LogString(), -time.Since(qm.NextAttempt).Round(time.Second), lastAttempt, qm.LastResult().Error) } if len(qmsgs) == 0 { fmt.Fprint(xw, "(none)\n") @@ -481,8 +485,8 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { < count */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) hold := ctl.xread() == "true" count, err := queue.HoldSet(ctx, f, hold) 
ctl.xcheck(err, "setting on hold status for messages") @@ -499,8 +503,8 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { < count */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) relnow := ctl.xread() d, err := time.ParseDuration(ctl.xread()) ctl.xcheck(err, "parsing duration for next delivery attempt") @@ -523,8 +527,8 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { < count */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) transport := ctl.xread() count, err := queue.TransportSet(ctx, f, transport) ctl.xcheck(err, "adding to next delivery attempts in queue") @@ -540,8 +544,8 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { < count */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) reqtls := ctl.xread() var req *bool switch reqtls { @@ -568,8 +572,8 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { < count */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) count, err := queue.Fail(ctx, log, f) ctl.xcheck(err, "marking messages from queue as failed") ctl.xwriteok() @@ -583,8 +587,8 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { < count */ - fs := ctl.xread() - f := xparseFilters(ctl, fs) + var f queue.Filter + xparseJSON(ctl, ctl.xread(), &f) count, err := queue.Drop(ctx, log, f) ctl.xcheck(err, "dropping messages from queue") ctl.xwriteok() @@ -612,6 +616,325 @@ func servectlcmd(ctx context.Context, ctl *ctl, shutdown func()) { ctl.xwriteok() ctl.xstreamfrom(mr) + case "queueretiredlist": + /* protocol: + > "queueretiredlist" + > filters as json + > sort as json + < "ok" + < stream + */ + var f queue.RetiredFilter + xparseJSON(ctl, ctl.xread(), &f) + var s queue.RetiredSort + xparseJSON(ctl, ctl.xread(), &s) + qmsgs, err := 
queue.RetiredList(ctx, f, s) + ctl.xcheck(err, "listing retired queue") + ctl.xwriteok() + + xw := ctl.writer() + fmt.Fprintln(xw, "retired messages:") + for _, qm := range qmsgs { + var lastAttempt string + if qm.LastAttempt != nil { + lastAttempt = time.Since(*qm.LastAttempt).Round(time.Second).String() + } + result := "failure" + if qm.Success { + result = "success" + } + sender, err := qm.Sender() + xcheckf(err, "parsing sender") + fmt.Fprintf(xw, "%5d %s %s from:%s to:%s last %s error %q\n", qm.ID, qm.Queued.Format(time.RFC3339), result, sender.LogString(), qm.Recipient().LogString(), lastAttempt, qm.LastResult().Error) + } + if len(qmsgs) == 0 { + fmt.Fprint(xw, "(none)\n") + } + xw.xclose() + + case "queueretiredprint": + /* protocol: + > "queueretiredprint" + > id + < "ok" + < stream + */ + idstr := ctl.xread() + id, err := strconv.ParseInt(idstr, 10, 64) + if err != nil { + ctl.xcheck(err, "parsing id") + } + l, err := queue.RetiredList(ctx, queue.RetiredFilter{IDs: []int64{id}}, queue.RetiredSort{}) + ctl.xcheck(err, "getting retired messages") + if len(l) == 0 { + ctl.xcheck(errors.New("not found"), "getting retired message") + } + m := l[0] + ctl.xwriteok() + xw := ctl.writer() + enc := json.NewEncoder(xw) + enc.SetIndent("", "\t") + err = enc.Encode(m) + ctl.xcheck(err, "encode retired message") + xw.xclose() + + case "queuehooklist": + /* protocol: + > "queuehooklist" + > filters as json + > sort as json + < "ok" + < stream + */ + var f queue.HookFilter + xparseJSON(ctl, ctl.xread(), &f) + var s queue.HookSort + xparseJSON(ctl, ctl.xread(), &s) + hooks, err := queue.HookList(ctx, f, s) + ctl.xcheck(err, "listing webhooks") + ctl.xwriteok() + + xw := ctl.writer() + fmt.Fprintln(xw, "webhooks:") + for _, h := range hooks { + var lastAttempt string + if len(h.Results) > 0 { + lastAttempt = time.Since(h.LastResult().Start).Round(time.Second).String() + } + fmt.Fprintf(xw, "%5d %s account:%s next %s last %s error %q url %s\n", h.ID, 
h.Submitted.Format(time.RFC3339), h.Account, time.Until(h.NextAttempt).Round(time.Second), lastAttempt, h.LastResult().Error, h.URL) + } + if len(hooks) == 0 { + fmt.Fprint(xw, "(none)\n") + } + xw.xclose() + + case "queuehookschedule": + /* protocol: + > "queuehookschedule" + > hookfilters as json + > relative to now + > duration + < "ok" or error + < count + */ + + var f queue.HookFilter + xparseJSON(ctl, ctl.xread(), &f) + relnow := ctl.xread() + d, err := time.ParseDuration(ctl.xread()) + ctl.xcheck(err, "parsing duration for next delivery attempt") + var count int + if relnow == "" { + count, err = queue.HookNextAttemptAdd(ctx, f, d) + } else { + count, err = queue.HookNextAttemptSet(ctx, f, time.Now().Add(d)) + } + ctl.xcheck(err, "setting next delivery attempts in queue") + ctl.xwriteok() + ctl.xwrite(fmt.Sprintf("%d", count)) + + case "queuehookcancel": + /* protocol: + > "queuehookcancel" + > hookfilters as json + < "ok" or error + < count + */ + + var f queue.HookFilter + xparseJSON(ctl, ctl.xread(), &f) + count, err := queue.HookCancel(ctx, log, f) + ctl.xcheck(err, "canceling webhooks in queue") + ctl.xwriteok() + ctl.xwrite(fmt.Sprintf("%d", count)) + + case "queuehookprint": + /* protocol: + > "queuehookprint" + > id + < "ok" + < stream + */ + idstr := ctl.xread() + id, err := strconv.ParseInt(idstr, 10, 64) + if err != nil { + ctl.xcheck(err, "parsing id") + } + l, err := queue.HookList(ctx, queue.HookFilter{IDs: []int64{id}}, queue.HookSort{}) + ctl.xcheck(err, "getting webhooks") + if len(l) == 0 { + ctl.xcheck(errors.New("not found"), "getting webhook") + } + h := l[0] + ctl.xwriteok() + xw := ctl.writer() + enc := json.NewEncoder(xw) + enc.SetIndent("", "\t") + err = enc.Encode(h) + ctl.xcheck(err, "encode webhook") + xw.xclose() + + case "queuehookretiredlist": + /* protocol: + > "queuehookretiredlist" + > filters as json + > sort as json + < "ok" + < stream + */ + var f queue.HookRetiredFilter + xparseJSON(ctl, ctl.xread(), &f) + var s 
queue.HookRetiredSort + xparseJSON(ctl, ctl.xread(), &s) + l, err := queue.HookRetiredList(ctx, f, s) + ctl.xcheck(err, "listing retired webhooks") + ctl.xwriteok() + + xw := ctl.writer() + fmt.Fprintln(xw, "retired webhooks:") + for _, h := range l { + var lastAttempt string + if len(h.Results) > 0 { + lastAttempt = time.Since(h.LastResult().Start).Round(time.Second).String() + } + result := "success" + if !h.Success { + result = "failure" + } + fmt.Fprintf(xw, "%5d %s %s account:%s last %s error %q url %s\n", h.ID, h.Submitted.Format(time.RFC3339), result, h.Account, lastAttempt, h.LastResult().Error, h.URL) + } + if len(l) == 0 { + fmt.Fprint(xw, "(none)\n") + } + xw.xclose() + + case "queuehookretiredprint": + /* protocol: + > "queuehookretiredprint" + > id + < "ok" + < stream + */ + idstr := ctl.xread() + id, err := strconv.ParseInt(idstr, 10, 64) + if err != nil { + ctl.xcheck(err, "parsing id") + } + l, err := queue.HookRetiredList(ctx, queue.HookRetiredFilter{IDs: []int64{id}}, queue.HookRetiredSort{}) + ctl.xcheck(err, "getting retired webhooks") + if len(l) == 0 { + ctl.xcheck(errors.New("not found"), "getting retired webhook") + } + h := l[0] + ctl.xwriteok() + xw := ctl.writer() + enc := json.NewEncoder(xw) + enc.SetIndent("", "\t") + err = enc.Encode(h) + ctl.xcheck(err, "encode retired webhook") + xw.xclose() + + case "queuesuppresslist": + /* protocol: + > "queuesuppresslist" + > account (or empty) + < "ok" or error + < stream + */ + + account := ctl.xread() + l, err := queue.SuppressionList(ctx, account) + ctl.xcheck(err, "listing suppressions") + ctl.xwriteok() + xw := ctl.writer() + fmt.Fprintln(xw, "suppressions (account, address, manual, time added, base address, reason):") + for _, sup := range l { + manual := "No" + if sup.Manual { + manual = "Yes" + } + fmt.Fprintf(xw, "%q\t%q\t%s\t%s\t%q\t%q\n", sup.Account, sup.OriginalAddress, manual, sup.Created.Round(time.Second), sup.BaseAddress, sup.Reason) + } + if len(l) == 0 { + fmt.Fprintln(xw, 
"(none)") + } + xw.xclose() + + case "queuesuppressadd": + /* protocol: + > "queuesuppressadd" + > account + > address + < "ok" or error + */ + + account := ctl.xread() + address := ctl.xread() + _, ok := mox.Conf.Account(account) + if !ok { + ctl.xcheck(errors.New("unknown account"), "looking up account") + } + addr, err := smtp.ParseAddress(address) + ctl.xcheck(err, "parsing address") + sup := webapi.Suppression{ + Account: account, + Manual: true, + Reason: "added through mox cli", + } + err = queue.SuppressionAdd(ctx, addr.Path(), &sup) + ctl.xcheck(err, "adding suppression") + ctl.xwriteok() + + case "queuesuppressremove": + /* protocol: + > "queuesuppressremove" + > account + > address + < "ok" or error + */ + + account := ctl.xread() + address := ctl.xread() + addr, err := smtp.ParseAddress(address) + ctl.xcheck(err, "parsing address") + err = queue.SuppressionRemove(ctx, account, addr.Path()) + ctl.xcheck(err, "removing suppression") + ctl.xwriteok() + + case "queuesuppresslookup": + /* protocol: + > "queuesuppresslookup" + > account or empty + > address + < "ok" or error + < stream + */ + + account := ctl.xread() + address := ctl.xread() + if account != "" { + _, ok := mox.Conf.Account(account) + if !ok { + ctl.xcheck(errors.New("unknown account"), "looking up account") + } + } + addr, err := smtp.ParseAddress(address) + ctl.xcheck(err, "parsing address") + sup, err := queue.SuppressionLookup(ctx, account, addr.Path()) + ctl.xcheck(err, "looking up suppression") + ctl.xwriteok() + xw := ctl.writer() + if sup == nil { + fmt.Fprintln(xw, "not present") + } else { + manual := "no" + if sup.Manual { + manual = "yes" + } + fmt.Fprintf(xw, "present\nadded: %s\nmanual: %s\nbase address: %s\nreason: %q\n", sup.Created.Round(time.Second), manual, sup.BaseAddress, sup.Reason) + } + xw.xclose() + case "importmaildir", "importmbox": mbox := cmd == "importmbox" importctl(ctx, ctl, mbox) diff --git a/ctl_test.go b/ctl_test.go index 3deb3a2..517e653 100644 --- 
a/ctl_test.go +++ b/ctl_test.go @@ -5,6 +5,7 @@ package main import ( "context" "flag" + "fmt" "net" "os" "path/filepath" @@ -17,6 +18,7 @@ import ( "github.com/mjl-/mox/mox-" "github.com/mjl-/mox/mtastsdb" "github.com/mjl-/mox/queue" + "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/store" "github.com/mjl-/mox/tlsrptdb" ) @@ -43,6 +45,9 @@ func TestCtl(t *testing.T) { } defer store.Switchboard()() + err := queue.Init() + tcheck(t, err, "queue init") + testctl := func(fn func(clientctl *ctl)) { t.Helper() @@ -65,9 +70,6 @@ func TestCtl(t *testing.T) { ctlcmdSetaccountpassword(ctl, "mjl", "test4321") }) - err := queue.Init() - tcheck(t, err, "queue init") - testctl(func(ctl *ctl) { ctlcmdQueueHoldrulesList(ctl) }) @@ -90,6 +92,22 @@ func TestCtl(t *testing.T) { ctlcmdQueueHoldrulesRemove(ctl, 1) }) + // Queue a message to list/change/dump. + msg := "Subject: subject\r\n\r\nbody\r\n" + msgFile, err := store.CreateMessageTemp(pkglog, "queuedump-test") + tcheck(t, err, "temp file") + _, err = msgFile.Write([]byte(msg)) + tcheck(t, err, "write message") + _, err = msgFile.Seek(0, 0) + tcheck(t, err, "rewind message") + defer os.Remove(msgFile.Name()) + defer msgFile.Close() + addr, err := smtp.ParseAddress("mjl@mox.example") + tcheck(t, err, "parse address") + qml := []queue.Msg{queue.MakeMsg(addr.Path(), addr.Path(), false, false, int64(len(msg)), "", nil, nil, time.Now(), "subject")} + queue.Add(ctxbg, pkglog, "mjl", msgFile, qml...) + qmid := qml[0].ID + // Has entries now. 
testctl(func(ctl *ctl) { ctlcmdQueueHoldrulesList(ctl) @@ -97,13 +115,16 @@ func TestCtl(t *testing.T) { // "queuelist" testctl(func(ctl *ctl) { - ctlcmdQueueList(ctl, queue.Filter{}) + ctlcmdQueueList(ctl, queue.Filter{}, queue.Sort{}) }) // "queueholdset" testctl(func(ctl *ctl) { ctlcmdQueueHoldSet(ctl, queue.Filter{}, true) }) + testctl(func(ctl *ctl) { + ctlcmdQueueHoldSet(ctl, queue.Filter{}, false) + }) // "queueschedule" testctl(func(ctl *ctl) { @@ -120,6 +141,11 @@ func TestCtl(t *testing.T) { ctlcmdQueueRequireTLS(ctl, queue.Filter{}, nil) }) + // "queuedump" + testctl(func(ctl *ctl) { + ctlcmdQueueDump(ctl, fmt.Sprintf("%d", qmid)) + }) + // "queuefail" testctl(func(ctl *ctl) { ctlcmdQueueFail(ctl, queue.Filter{}) @@ -130,7 +156,92 @@ func TestCtl(t *testing.T) { ctlcmdQueueDrop(ctl, queue.Filter{}) }) - // no "queuedump", we don't have a message to dump, and the commands exits without a message. + // "queueholdruleslist" + testctl(func(ctl *ctl) { + ctlcmdQueueHoldrulesList(ctl) + }) + + // "queueholdrulesadd" + testctl(func(ctl *ctl) { + ctlcmdQueueHoldrulesAdd(ctl, "mjl", "", "") + }) + testctl(func(ctl *ctl) { + ctlcmdQueueHoldrulesAdd(ctl, "mjl", "localhost", "") + }) + + // "queueholdrulesremove" + testctl(func(ctl *ctl) { + ctlcmdQueueHoldrulesRemove(ctl, 2) + }) + testctl(func(ctl *ctl) { + ctlcmdQueueHoldrulesList(ctl) + }) + + // "queuesuppresslist" + testctl(func(ctl *ctl) { + ctlcmdQueueSuppressList(ctl, "mjl") + }) + + // "queuesuppressadd" + testctl(func(ctl *ctl) { + ctlcmdQueueSuppressAdd(ctl, "mjl", "base@localhost") + }) + testctl(func(ctl *ctl) { + ctlcmdQueueSuppressAdd(ctl, "mjl", "other@localhost") + }) + + // "queuesuppresslookup" + testctl(func(ctl *ctl) { + ctlcmdQueueSuppressLookup(ctl, "mjl", "base@localhost") + }) + + // "queuesuppressremove" + testctl(func(ctl *ctl) { + ctlcmdQueueSuppressRemove(ctl, "mjl", "base@localhost") + }) + testctl(func(ctl *ctl) { + ctlcmdQueueSuppressList(ctl, "mjl") + }) + + // "queueretiredlist" + 
testctl(func(ctl *ctl) { + ctlcmdQueueRetiredList(ctl, queue.RetiredFilter{}, queue.RetiredSort{}) + }) + + // "queueretiredprint" + testctl(func(ctl *ctl) { + ctlcmdQueueRetiredPrint(ctl, "1") + }) + + // "queuehooklist" + testctl(func(ctl *ctl) { + ctlcmdQueueHookList(ctl, queue.HookFilter{}, queue.HookSort{}) + }) + + // "queuehookschedule" + testctl(func(ctl *ctl) { + ctlcmdQueueHookSchedule(ctl, queue.HookFilter{}, true, time.Minute) + }) + + // "queuehookprint" + testctl(func(ctl *ctl) { + ctlcmdQueueHookPrint(ctl, "1") + }) + + // "queuehookcancel" + testctl(func(ctl *ctl) { + ctlcmdQueueHookCancel(ctl, queue.HookFilter{}) + }) + + // "queuehookretiredlist" + testctl(func(ctl *ctl) { + ctlcmdQueueHookRetiredList(ctl, queue.HookRetiredFilter{}, queue.HookRetiredSort{}) + }) + + // "queuehookretiredprint" + testctl(func(ctl *ctl) { + ctlcmdQueueHookRetiredPrint(ctl, "1") + }) // "importmbox" testctl(func(ctl *ctl) { diff --git a/develop.txt b/develop.txt index b241f70..6fcbea2 100644 --- a/develop.txt +++ b/develop.txt @@ -307,7 +307,7 @@ done - Check code if there are deprecated features that can be removed. - Generate apidiff and check if breaking changes can be prevented. Update moxtools. - Update features & roadmap in README.md -- Write release notes. +- Write release notes, copy from previous. - Build and run tests with previous major Go release. - Run tests, including with race detector. - Run integration and upgrade tests. @@ -320,7 +320,7 @@ done - Check with https://internet.nl. - Move apidiff/next.txt to apidiff/.txt, and create empty next.txt. - Add release to the Latest release & News sections of website/index.md. -- Create git tag, push code. +- Create git tag (note: "#" is comment, not title/header), push code. - Publish new docker image. - Publish signed release notes for updates.xmox.nl and update DNS record. - Deploy update to website. 
diff --git a/dmarcdb/eval.go b/dmarcdb/eval.go index e15c450..39b80e2 100644 --- a/dmarcdb/eval.go +++ b/dmarcdb/eval.go @@ -842,7 +842,7 @@ Period: %s - %s UTC continue } - qm := queue.MakeMsg(from.Path(), rcpt.address.Path(), has8bit, smtputf8, msgSize, messageID, []byte(msgPrefix), nil, time.Now()) + qm := queue.MakeMsg(from.Path(), rcpt.address.Path(), has8bit, smtputf8, msgSize, messageID, []byte(msgPrefix), nil, time.Now(), subject) // Don't try as long as regular deliveries, and stop before we would send the // delayed DSN. Though we also won't send that due to IsDMARCReport. qm.MaxAttempts = 5 @@ -911,7 +911,7 @@ func composeAggregateReport(ctx context.Context, log mlog.Log, mf *os.File, from xc.Line() // Textual part, just mentioning this is a DMARC report. - textBody, ct, cte := xc.TextPart(text) + textBody, ct, cte := xc.TextPart("plain", text) textHdr := textproto.MIMEHeader{} textHdr.Set("Content-Type", ct) textHdr.Set("Content-Transfer-Encoding", cte) @@ -997,7 +997,7 @@ Submitting-URI: %s continue } - qm := queue.MakeMsg(fromAddr.Path(), rcpt.Address.Path(), has8bit, smtputf8, msgSize, messageID, []byte(msgPrefix), nil, time.Now()) + qm := queue.MakeMsg(fromAddr.Path(), rcpt.Address.Path(), has8bit, smtputf8, msgSize, messageID, []byte(msgPrefix), nil, time.Now(), subject) // Don't try as long as regular deliveries, and stop before we would send the // delayed DSN. Though we also won't send that due to IsDMARCReport. qm.MaxAttempts = 5 @@ -1045,7 +1045,7 @@ func composeErrorReport(ctx context.Context, log mlog.Log, mf *os.File, fromAddr xc.Header("User-Agent", "mox/"+moxvar.Version) xc.Header("MIME-Version", "1.0") - textBody, ct, cte := xc.TextPart(text) + textBody, ct, cte := xc.TextPart("plain", text) xc.Header("Content-Type", ct) xc.Header("Content-Transfer-Encoding", cte) xc.Line() diff --git a/doc.go b/doc.go index dd77620..c39d6bf 100644 --- a/doc.go +++ b/doc.go @@ -28,15 +28,27 @@ any parameters. 
Followed by the help and usage information for each command. mox queue holdrules list mox queue holdrules add [ruleflags] mox queue holdrules remove ruleid - mox queue list [filterflags] + mox queue list [filtersortflags] mox queue hold [filterflags] mox queue unhold [filterflags] - mox queue schedule [filterflags] duration + mox queue schedule [filterflags] [-now] duration mox queue transport [filterflags] transport mox queue requiretls [filterflags] {yes | no | default} mox queue fail [filterflags] mox queue drop [filterflags] mox queue dump id + mox queue retired list [filtersortflags] + mox queue retired print id + mox queue suppress list [-account account] + mox queue suppress add account address + mox queue suppress remove account address + mox queue suppress lookup [-account account] address + mox queue webhook list [filtersortflags] + mox queue webhook schedule [filterflags] duration + mox queue webhook cancel [filterflags] + mox queue webhook print id + mox queue webhook retired list [filtersortflags] + mox queue webhook retired print id mox import maildir accountname mailboxname maildir mox import mbox accountname mailboxname mbox mox export maildir dst-dir account-path [mailbox] @@ -59,7 +71,7 @@ any parameters. Followed by the help and usage information for each command. mox config describe-sendmail >/etc/moxsubmit.conf mox config printservice >mox.service mox config ensureacmehostprivatekeys - mox example [name] + mox config example [name] mox checkupdate mox cid cid mox clientconfig domain @@ -88,6 +100,8 @@ any parameters. Followed by the help and usage information for each command. mox tlsrpt lookup domain mox tlsrpt parsereportmsg message ... 
mox version + mox webapi [method [baseurl-with-credentials]] + mox example [name] mox bumpuidvalidity account [mailbox] mox reassignuids account [mailboxid] mox fixuidmeta account @@ -143,8 +157,8 @@ domains with HTTP/HTTPS, including with automatic TLS with ACME, is easily configured through both configuration files and admin web interface, and can act as a reverse proxy (and static file server for that matter), so you can forward traffic to your existing backend applications. Look for "WebHandlers:" in the -output of "mox config describe-domains" and see the output of "mox example -webhandlers". +output of "mox config describe-domains" and see the output of +"mox config example webhandlers". usage: mox quickstart [-skipdial] [-existing-webserver] [-hostname host] user@domain [user | uid] -existing-webserver @@ -244,17 +258,23 @@ List matching messages in the delivery queue. Prints the message with its ID, last and next delivery attempts, last error. - usage: mox queue list [filterflags] + usage: mox queue list [filtersortflags] -account string account that queued the message + -asc + sort ascending instead of descending (default) -from string from address of message, use "@example.com" to match all messages for a domain -hold value true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) + -sort value + field to sort by, "nextattempt" (default) or "queued" -submitted string filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now) -to string @@ -278,6 +298,8 @@ otherwise handled by the admin. 
true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -submitted string @@ -303,6 +325,8 @@ delivery attempt. See the "queue schedule" command. true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -submitted string @@ -322,7 +346,7 @@ current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0". - usage: mox queue schedule [filterflags] duration + usage: mox queue schedule [filterflags] [-now] duration -account string account that queued the message -from string @@ -331,6 +355,8 @@ Schedule immediate delivery with "mox queue schedule -now 0". true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -now @@ -360,6 +386,8 @@ another mail server or with connections over a SOCKS proxy. true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -submitted string @@ -392,6 +420,8 @@ TLS. 
true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -submitted string @@ -418,6 +448,8 @@ contains a line saying the message was canceled by the admin. true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -submitted string @@ -443,6 +475,8 @@ the message, use "queue dump" before removing. true or false, whether to match only messages that are (not) on hold -ids value comma-separated list of message IDs + -n int + number of messages to return -nextattempt string filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) -submitted string @@ -460,6 +494,180 @@ The message is printed to stdout and is in standard internet mail format. usage: mox queue dump id +# mox queue retired list + +List matching messages in the retired queue. + +Prints messages with their ID and results. 
+ + usage: mox queue retired list [filtersortflags] + -account string + account that queued the message + -asc + sort ascending instead of descending (default) + -from string + from address of message, use "@example.com" to match all messages for a domain + -ids value + comma-separated list of retired message IDs + -lastactivity string + filter by time of last activity relative to now, value must start with "<" (before now) or ">" (after now) + -n int + number of messages to return + -result value + "success" or "failure" as result of delivery + -sort value + field to sort by, "lastactivity" (default) or "queued" + -submitted string + filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now) + -to string + recipient address of message, use "@example.com" to match all messages for a domain + -transport value + transport to use for messages, empty string sets the default behaviour + +# mox queue retired print + +Print a message from the retired queue. + +Prints a JSON representation of the information from the retired queue. + + usage: mox queue retired print id + +# mox queue suppress list + +Print addresses in suppression list. + + usage: mox queue suppress list [-account account] + -account string + only show suppression list for this account + +# mox queue suppress add + +Add address to suppression list for account. + + usage: mox queue suppress add account address + +# mox queue suppress remove + +Remove address from suppression list for account. + + usage: mox queue suppress remove account address + +# mox queue suppress lookup + +Check if address is present in suppression list, for any or specific account. + + usage: mox queue suppress lookup [-account account] address + -account string + only check address in specified account + +# mox queue webhook list + +List matching webhooks in the queue. + +Prints list of webhooks, their IDs and basic information. 
+ + usage: mox queue webhook list [filtersortflags] + -account string + account that queued the message/webhook + -asc + sort ascending instead of descending (default) + -event value + event this webhook is about: incoming, delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized + -ids value + comma-separated list of webhook IDs + -n int + number of webhooks to return + -nextattempt string + filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) + -sort value + field to sort by, "nextattempt" (default) or "queued" + -submitted string + filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now) + +# mox queue webhook schedule + +Change next delivery attempt for matching webhooks. + +The next delivery attempt is adjusted by the duration parameter. If the -now +flag is set, the new delivery attempt is set to the duration added to the +current time, instead of added to the current scheduled time. + +Schedule immediate delivery with "mox queue webhook schedule -now 0". + + usage: mox queue webhook schedule [filterflags] duration + -account string + account that queued the message/webhook + -event value + event this webhook is about: incoming, delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized + -ids value + comma-separated list of webhook IDs + -n int + number of webhooks to return + -nextattempt string + filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) + -now + schedule for duration relative to current time instead of relative to current next delivery attempt for webhooks + -submitted string + filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now) + +# mox queue webhook cancel + +Fail delivery of matching webhooks. 
+ + usage: mox queue webhook cancel [filterflags] + -account string + account that queued the message/webhook + -event value + event this webhook is about: incoming, delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized + -ids value + comma-separated list of webhook IDs + -n int + number of webhooks to return + -nextattempt string + filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now) + -submitted string + filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now) + +# mox queue webhook print + +Print details of a webhook from the queue. + +The webhook is printed to stdout as JSON. + + usage: mox queue webhook print id + +# mox queue webhook retired list + +List matching webhooks in the retired queue. + +Prints list of retired webhooks, their IDs and basic information. + + usage: mox queue webhook retired list [filtersortflags] + -account string + account that queued the message/webhook + -asc + sort ascending instead of descending (default) + -event value + event this webhook is about: incoming, delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized + -ids value + comma-separated list of retired webhook IDs + -lastactivity string + filter by time of last activity relative to now, value must start with "<" (before now) or ">" (after now) + -n int + number of webhooks to return + -sort value + field to sort by, "lastactivity" (default) or "queued" + -submitted string + filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now) + +# mox queue webhook retired print + +Print details of a webhook from the retired queue. + +The retired webhook is printed to stdout as JSON. + + usage: mox queue webhook retired print id + # mox import maildir Import a maildir into an account. 
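The filter/sort flags shared by the list commands above can be modeled with Go's standard flag package. A hypothetical stand-in sketch (mox's real flag wiring differs, e.g. -ids and -event use custom flag.Value types):

```go
package main

import (
	"flag"
	"fmt"
)

// filterSortFlags is a hypothetical subset of the shared filtersortflags.
type filterSortFlags struct {
	n       int    // -n: number of entries to return
	asc     bool   // -asc: sort ascending instead of descending
	sortKey string // -sort: "nextattempt" (default) or "queued"
	account string // -account: account that queued the entry
}

// parseFilterSortFlags parses the shared flags from an argument list.
func parseFilterSortFlags(args []string) (filterSortFlags, error) {
	var f filterSortFlags
	fs := flag.NewFlagSet("list", flag.ContinueOnError)
	fs.IntVar(&f.n, "n", 0, "number of entries to return")
	fs.BoolVar(&f.asc, "asc", false, "sort ascending instead of descending (default)")
	fs.StringVar(&f.sortKey, "sort", "nextattempt", `field to sort by, "nextattempt" (default) or "queued"`)
	fs.StringVar(&f.account, "account", "", "account that queued the entry")
	err := fs.Parse(args)
	return f, err
}

func main() {
	f, err := parseFilterSortFlags([]string{"-n", "10", "-asc", "-sort", "queued"})
	if err != nil {
		panic(err)
	}
	fmt.Println(f.n, f.asc, f.sortKey) // 10 true queued
}
```

Defining the set once and reusing it across `list`, `schedule`, and `cancel` keeps the flags consistent, which is why the usage texts above repeat identically.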
@@ -552,8 +760,9 @@ automatically initialized with configuration files, an account with email address mox@localhost and password moxmoxmox, and a newly generated self-signed TLS certificate. -All incoming email to any address is accepted (if checks pass), unless the -recipient localpart ends with: +All incoming email to any address is accepted (if checks pass) and delivered to +the account that is submitting the message, unless the recipient localpart ends +with: - "temperror": fail with a temporary error code - "permerror": fail with a permanent error code @@ -561,7 +770,8 @@ recipient localpart ends with: - "timeout": no response (for an hour) If the localpart begins with "mailfrom" or "rcptto", the error is returned -during those commands instead of during "data". +during those commands instead of during "data". If the localpart begins with +"queue", the submission is accepted but delivery from the queue will fail. usage: mox localserve -dir string @@ -793,11 +1003,11 @@ for a domain and create the TLSA DNS records it suggests to enable DANE. usage: mox config ensureacmehostprivatekeys -# mox example +# mox config example -List available examples, or print a specific example. +List available config examples, or print a specific example. - usage: mox example [name] + usage: mox config example [name] # mox checkupdate @@ -1128,6 +1338,18 @@ Prints this mox version. usage: mox version +# mox webapi + +Lists available methods, prints request/response parameters for method, or calls a method with a request read from standard input. + + usage: mox webapi [method [baseurl-with-credentials]] + +# mox example + +List available examples, or print a specific example. + + usage: mox example [name] + # mox bumpuidvalidity Change the IMAP UID validity of the mailbox, causing IMAP clients to refetch messages. @@ -1212,6 +1434,8 @@ and print them. Parse message, print JSON representation. 
usage: mox message parse message.eml + -smtputf8 + check if message needs smtputf8 # mox reassignthreads diff --git a/dsn/dsn.go b/dsn/dsn.go index 25d859e..13cde4a 100644 --- a/dsn/dsn.go +++ b/dsn/dsn.go @@ -114,9 +114,9 @@ type Recipient struct { // deliveries. RemoteMTA NameIP - // DiagnosticCode should either be empty, or start with "smtp; " followed by the - // literal full SMTP response lines, space separated. - DiagnosticCode string + // DiagnosticCodeSMTP are the full SMTP response lines, space separated. The marshaled + // form starts with "smtp; ", this value does not. + DiagnosticCodeSMTP string LastAttemptDate time.Time FinalLogID string @@ -286,9 +286,9 @@ func (m *Message) Compose(log mlog.Log, smtputf8 bool) ([]byte, error) { status("Remote-MTA", s) } // Presence of Diagnostic-Code indicates the code is from Remote-MTA. ../rfc/3464:1053 - if r.DiagnosticCode != "" { + if r.DiagnosticCodeSMTP != "" { // ../rfc/3461:1342 ../rfc/6533:589 - status("Diagnostic-Code", r.DiagnosticCode) + status("Diagnostic-Code", "smtp; "+r.DiagnosticCodeSMTP) } if !r.LastAttemptDate.IsZero() { status("Last-Attempt-Date", r.LastAttemptDate.Format(message.RFC5322Z)) // ../rfc/3464:1076 diff --git a/dsn/parse.go b/dsn/parse.go index edbc4ab..0cd29be 100644 --- a/dsn/parse.go +++ b/dsn/parse.go @@ -249,7 +249,7 @@ func parseRecipientHeader(mr *textproto.Reader, utf8 bool) (Recipient, error) { } else if len(t) != 2 { err = fmt.Errorf("missing semicolon to separate diagnostic-type from code") } else { - r.DiagnosticCode = strings.TrimSpace(t[1]) + r.DiagnosticCodeSMTP = strings.TrimSpace(t[1]) } case "Last-Attempt-Date": r.LastAttemptDate, err = parseDateTime(v) diff --git a/examples.go b/examples.go index f7eb8b9..0a3e7b4 100644 --- a/examples.go +++ b/examples.go @@ -1,13 +1,21 @@ package main import ( + "bytes" + "encoding/base64" + "encoding/json" "fmt" "log" + "reflect" "strings" + "time" "github.com/mjl-/sconf" "github.com/mjl-/mox/config" + "github.com/mjl-/mox/mox-" + 
"github.com/mjl-/mox/smtp" + "github.com/mjl-/mox/webhook" ) func cmdExample(c *cmd) { @@ -36,7 +44,33 @@ func cmdExample(c *cmd) { fmt.Print(match()) } -var examples = []struct { +func cmdConfigExample(c *cmd) { + c.params = "[name]" + c.help = `List available config examples, or print a specific example.` + + args := c.Parse() + if len(args) > 1 { + c.Usage() + } + + var match func() string + for _, ex := range configExamples { + if len(args) == 0 { + fmt.Println(ex.Name) + } else if args[0] == ex.Name { + match = ex.Get + } + } + if len(args) == 0 { + return + } + if match == nil { + log.Fatalln("not found") + } + fmt.Print(match()) +} + +var configExamples = []struct { Name string Get func() string }{ @@ -195,3 +229,97 @@ Routes: }, }, } + +var exampleTime = time.Date(2024, time.March, 27, 0, 0, 0, 0, time.UTC) + +var examples = []struct { + Name string + Get func() string +}{ + { + "webhook-outgoing-delivered", + func() string { + v := webhook.Outgoing{ + Version: 0, + Event: webhook.EventDelivered, + QueueMsgID: 101, + FromID: base64.RawURLEncoding.EncodeToString([]byte("0123456789abcdef")), + MessageID: "", + Subject: "subject of original message", + WebhookQueued: exampleTime, + Extra: map[string]string{}, + SMTPCode: smtp.C250Completed, + } + return "Example webhook HTTP POST JSON body for successful outgoing delivery:\n\n\t" + formatJSON(v) + }, + }, + { + "webhook-outgoing-dsn-failed", + func() string { + v := webhook.Outgoing{ + Version: 0, + Event: webhook.EventFailed, + DSN: true, + Suppressing: true, + QueueMsgID: 102, + FromID: base64.RawURLEncoding.EncodeToString([]byte("0123456789abcdef")), + MessageID: "", + Subject: "subject of original message", + WebhookQueued: exampleTime, + Extra: map[string]string{"userid": "456"}, + Error: "timeout connecting to host", + SMTPCode: smtp.C554TransactionFailed, + SMTPEnhancedCode: "5." 
+ smtp.SeNet4Other0, + } + return `Example webhook HTTP POST JSON body for failed delivery based on incoming DSN +message, with custom extra data fields (from original submission), and adding address to the suppression list: + + ` + formatJSON(v) + }, + }, + { + "webhook-incoming-basic", + func() string { + v := webhook.Incoming{ + Version: 0, + From: []webhook.NameAddress{{Address: "mox@localhost"}}, + To: []webhook.NameAddress{{Address: "mjl@localhost"}}, + Subject: "hi", + MessageID: "", + Date: &exampleTime, + Text: "hello world ☺\n", + Structure: webhook.Structure{ + ContentType: "text/plain", + ContentTypeParams: map[string]string{"charset": "utf-8"}, + DecodedSize: int64(len("hello world ☺\r\n")), + Parts: []webhook.Structure{}, + }, + Meta: webhook.IncomingMeta{ + MsgID: 201, + MailFrom: "mox@localhost", + MailFromValidated: false, + MsgFromValidated: true, + RcptTo: "mjl@localhost", + DKIMVerifiedDomains: []string{"localhost"}, + RemoteIP: "127.0.0.1", + Received: exampleTime.Add(3 * time.Second), + MailboxName: "Inbox", + Automated: false, + }, + } + return "Example JSON body for webhooks for incoming delivery of basic message:\n\n\t" + formatJSON(v) + }, + }, +} + +func formatJSON(v any) string { + nv, _ := mox.FillNil(reflect.ValueOf(v)) + v = nv.Interface() + var b bytes.Buffer + enc := json.NewEncoder(&b) + enc.SetIndent("\t", "\t") + enc.SetEscapeHTML(false) + err := enc.Encode(v) + xcheckf(err, "encoding to json") + return b.String() +} diff --git a/gendoc.sh b/gendoc.sh index b7b2a78..833fe86 100755 --- a/gendoc.sh +++ b/gendoc.sh @@ -1,5 +1,6 @@ #!/usr/bin/env sh +# ./doc.go ( cat <doc.go gofmt -w doc.go +# ./config/doc.go ( cat <". Below are all examples included in mox. +examples with "mox config example", and print a specific example with "mox +config example ". Below are all examples included in mox. 
EOF -for ex in $(./mox example); do +for ex in $(./mox config example); do echo '# Example '$ex echo - ./mox example $ex | sed 's/^/\t/' + ./mox config example $ex | sed 's/^/\t/' echo done @@ -112,3 +114,7 @@ package config EOF )>config/doc.go gofmt -w config/doc.go + +# ./webapi/doc.go +./webapi/gendoc.sh >webapi/doc.go +gofmt -w webapi/doc.go diff --git a/gentestdata.go b/gentestdata.go index 5292a89..4671196 100644 --- a/gentestdata.go +++ b/gentestdata.go @@ -233,7 +233,7 @@ Accounts: const qmsg = "From: \r\nTo: \r\nSubject: test\r\n\r\nthe message...\r\n" _, err = fmt.Fprint(mf, qmsg) xcheckf(err, "writing message") - qm := queue.MakeMsg(mailfrom, rcptto, false, false, int64(len(qmsg)), "", prefix, nil, time.Now()) + qm := queue.MakeMsg(mailfrom, rcptto, false, false, int64(len(qmsg)), "", prefix, nil, time.Now(), "test") err = queue.Add(ctxbg, c.log, "test0", mf, qm) xcheckf(err, "enqueue message") diff --git a/http/web.go b/http/web.go index e2722cf..eb880b8 100644 --- a/http/web.go +++ b/http/web.go @@ -35,6 +35,7 @@ import ( "github.com/mjl-/mox/ratelimit" "github.com/mjl-/mox/webaccount" "github.com/mjl-/mox/webadmin" + "github.com/mjl-/mox/webapisrv" "github.com/mjl-/mox/webmail" ) @@ -577,6 +578,30 @@ func Listen() { if maxMsgSize == 0 { maxMsgSize = config.DefaultMaxMsgSize } + + if l.WebAPIHTTP.Enabled { + port := config.Port(l.WebAPIHTTP.Port, 80) + path := "/webapi/" + if l.WebAPIHTTP.Path != "" { + path = l.WebAPIHTTP.Path + } + srv := ensureServe(false, port, "webapi-http at "+path) + handler := safeHeaders(http.StripPrefix(path[:len(path)-1], webapisrv.NewServer(maxMsgSize, path, l.WebAPIHTTP.Forwarded))) + srv.Handle("webapi", nil, path, handler) + redirectToTrailingSlash(srv, "webapi", path) + } + if l.WebAPIHTTPS.Enabled { + port := config.Port(l.WebAPIHTTPS.Port, 443) + path := "/webapi/" + if l.WebAPIHTTPS.Path != "" { + path = l.WebAPIHTTPS.Path + } + srv := ensureServe(true, port, "webapi-https at "+path) + handler := 
safeHeaders(http.StripPrefix(path[:len(path)-1], webapisrv.NewServer(maxMsgSize, path, l.WebAPIHTTPS.Forwarded))) + srv.Handle("webapi", nil, path, handler) + redirectToTrailingSlash(srv, "webapi", path) + } + if l.WebmailHTTP.Enabled { port := config.Port(l.WebmailHTTP.Port, 80) path := "/webmail/" diff --git a/lib.ts b/lib.ts index 376fddc..b371a9f 100644 --- a/lib.ts +++ b/lib.ts @@ -216,6 +216,7 @@ const attr = { autocomplete: (s: string) => _attr('autocomplete', s), list: (s: string) => _attr('list', s), form: (s: string) => _attr('form', s), + size: (s: string) => _attr('size', s), } const style = (x: {[k: string]: string | number}) => { return {_styles: x}} const prop = (x: {[k: string]: any}) => { return {_props: x}} diff --git a/localserve.go b/localserve.go index b206367..2a4ddfe 100644 --- a/localserve.go +++ b/localserve.go @@ -49,8 +49,9 @@ automatically initialized with configuration files, an account with email address mox@localhost and password moxmoxmox, and a newly generated self-signed TLS certificate. -All incoming email to any address is accepted (if checks pass), unless the -recipient localpart ends with: +All incoming email to any address is accepted (if checks pass) and delivered to +the account that is submitting the message, unless the recipient localpart ends +with: - "temperror": fail with a temporary error code - "permerror": fail with a permanent error code @@ -58,7 +59,8 @@ recipient localpart ends with: - "timeout": no response (for an hour) If the localpart begins with "mailfrom" or "rcptto", the error is returned -during those commands instead of during "data". +during those commands instead of during "data". If the localpart begins with +"queue", the submission is accepted but delivery from the queue will fail. ` golog.SetFlags(0) @@ -163,7 +165,9 @@ during those commands instead of during "data". 
golog.Printf(`- [45][0-9][0-9]: fail with the specific error code.`) golog.Printf(`- "timeout": no response (for an hour).`) golog.Print("") - golog.Printf(`if the localpart begins with "mailfrom" or "rcptto", the error is returned during those commands instead of during "data"`) + golog.Print(`if the localpart begins with "mailfrom" or "rcptto", the error is returned`) + golog.Print(`during those commands instead of during "data". if the localpart begins with`) + golog.Print(`"queue", the submission is accepted but delivery from the queue will fail.`) golog.Print("") golog.Print(" smtp://localhost:1025 - receive email") golog.Print("smtps://mox%40localhost:moxmoxmox@localhost:1465 - send email") @@ -174,6 +178,8 @@ during those commands instead of during "data". golog.Print(" http://localhost:1080/account/ - account http (without tls)") golog.Print("https://localhost:1443/webmail/ - webmail https (email mox@localhost, password moxmoxmox)") golog.Print(" http://localhost:1080/webmail/ - webmail http (without tls)") + golog.Print("https://localhost:1443/webapi/ - webapi https (email mox@localhost, password moxmoxmox)") + golog.Print(" http://localhost:1080/webapi/ - webapi http (without tls)") golog.Print("https://localhost:1443/admin/ - admin https (password moxadmin)") golog.Print(" http://localhost:1080/admin/ - admin http (without tls)") golog.Print("") @@ -332,6 +338,12 @@ func writeLocalConfig(log mlog.Log, dir, ip string) (rerr error) { local.WebmailHTTPS.Enabled = true local.WebmailHTTPS.Port = 1443 local.WebmailHTTPS.Path = "/webmail/" + local.WebAPIHTTP.Enabled = true + local.WebAPIHTTP.Port = 1080 + local.WebAPIHTTP.Path = "/webapi/" + local.WebAPIHTTPS.Enabled = true + local.WebAPIHTTPS.Port = 1443 + local.WebAPIHTTPS.Path = "/webapi/" local.AdminHTTP.Enabled = true local.AdminHTTP.Port = 1080 local.AdminHTTPS.Enabled = true @@ -375,7 +387,9 @@ func writeLocalConfig(log mlog.Log, dir, ip string) (rerr error) { // Write domains.conf. 
acc := config.Account{ - RejectsMailbox: "Rejects", + KeepRetiredMessagePeriod: 72 * time.Hour, + KeepRetiredWebhookPeriod: 72 * time.Hour, + RejectsMailbox: "Rejects", Destinations: map[string]config.Destination{ "mox@localhost": {}, }, diff --git a/main.go b/main.go index 63a8bb2..4f0324e 100644 --- a/main.go +++ b/main.go @@ -1,6 +1,7 @@ package main import ( + "bufio" "bytes" "context" "crypto" @@ -23,9 +24,11 @@ import ( "log" "log/slog" "net" + "net/http" "net/url" "os" "path/filepath" + "reflect" "runtime" "slices" "strconv" @@ -57,6 +60,7 @@ import ( "github.com/mjl-/mox/moxvar" "github.com/mjl-/mox/mtasts" "github.com/mjl-/mox/publicsuffix" + "github.com/mjl-/mox/queue" "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/smtpclient" "github.com/mjl-/mox/spf" @@ -65,6 +69,7 @@ import ( "github.com/mjl-/mox/tlsrptdb" "github.com/mjl-/mox/updates" "github.com/mjl-/mox/webadmin" + "github.com/mjl-/mox/webapi" ) var ( @@ -111,6 +116,18 @@ var commands = []struct { {"queue fail", cmdQueueFail}, {"queue drop", cmdQueueDrop}, {"queue dump", cmdQueueDump}, + {"queue retired list", cmdQueueRetiredList}, + {"queue retired print", cmdQueueRetiredPrint}, + {"queue suppress list", cmdQueueSuppressList}, + {"queue suppress add", cmdQueueSuppressAdd}, + {"queue suppress remove", cmdQueueSuppressRemove}, + {"queue suppress lookup", cmdQueueSuppressLookup}, + {"queue webhook list", cmdQueueHookList}, + {"queue webhook schedule", cmdQueueHookSchedule}, + {"queue webhook cancel", cmdQueueHookCancel}, + {"queue webhook print", cmdQueueHookPrint}, + {"queue webhook retired list", cmdQueueHookRetiredList}, + {"queue webhook retired print", cmdQueueHookRetiredPrint}, {"import maildir", cmdImportMaildir}, {"import mbox", cmdImportMbox}, {"export maildir", cmdExportMaildir}, @@ -134,7 +151,7 @@ var commands = []struct { {"config describe-sendmail", cmdConfigDescribeSendmail}, {"config printservice", cmdConfigPrintservice}, {"config ensureacmehostprivatekeys", 
cmdConfigEnsureACMEHostprivatekeys}, - {"example", cmdExample}, + {"config example", cmdConfigExample}, {"checkupdate", cmdCheckupdate}, {"cid", cmdCid}, @@ -166,7 +183,9 @@ var commands = []struct { {"tlsrpt lookup", cmdTLSRPTLookup}, {"tlsrpt parsereportmsg", cmdTLSRPTParsereportmsg}, {"version", cmdVersion}, + {"webapi", cmdWebapi}, + {"example", cmdExample}, {"bumpuidvalidity", cmdBumpUIDValidity}, {"reassignuids", cmdReassignUIDs}, {"fixuidmeta", cmdFixUIDMeta}, @@ -196,6 +215,7 @@ var commands = []struct { {"ximport mbox", cmdXImportMbox}, {"openaccounts", cmdOpenaccounts}, {"readmessages", cmdReadmessages}, + {"queuefillretired", cmdQueueFillRetired}, } var cmds []cmd @@ -2384,6 +2404,7 @@ The report is printed in formatted JSON. // todo future: only print the highlights? enc := json.NewEncoder(os.Stdout) enc.SetIndent("", "\t") + enc.SetEscapeHTML(false) err = enc.Encode(reportJSON) xcheckf(err, "write report") } @@ -2661,6 +2682,97 @@ func cmdVersion(c *cmd) { fmt.Printf("%s %s/%s\n", runtime.Version(), runtime.GOOS, runtime.GOARCH) } +func cmdWebapi(c *cmd) { + c.params = "[method [baseurl-with-credentials]]" + c.help = "Lists available methods, prints request/response parameters for method, or calls a method with a request read from standard input." 
+ args := c.Parse() + if len(args) > 2 { + c.Usage() + } + + t := reflect.TypeOf((*webapi.Methods)(nil)).Elem() + methods := map[string]reflect.Type{} + var ml []string + for i := 0; i < t.NumMethod(); i++ { + mt := t.Method(i) + methods[mt.Name] = mt.Type + ml = append(ml, mt.Name) + } + + if len(args) == 0 { + fmt.Println(strings.Join(ml, "\n")) + return + } + + mt, ok := methods[args[0]] + if !ok { + log.Fatalf("unknown method %q", args[0]) + } + resultNotJSON := mt.Out(0).Kind() == reflect.Interface + + if len(args) == 1 { + fmt.Println("# Example request") + fmt.Println() + printJSON("\t", mox.FillExample(nil, reflect.New(mt.In(1))).Interface()) + fmt.Println() + if resultNotJSON { + fmt.Println("Output is non-JSON data.") + return + } + fmt.Println("# Example response") + fmt.Println() + printJSON("\t", mox.FillExample(nil, reflect.New(mt.Out(0))).Interface()) + return + } + + var response any + if !resultNotJSON { + response = reflect.New(mt.Out(0)) + } + + fmt.Fprintln(os.Stderr, "reading request from stdin...") + request, err := io.ReadAll(os.Stdin) + xcheckf(err, "read message") + + dec := json.NewDecoder(bytes.NewReader(request)) + dec.DisallowUnknownFields() + err = dec.Decode(reflect.New(mt.In(1)).Interface()) + xcheckf(err, "parsing request") + + resp, err := http.PostForm(args[1]+args[0], url.Values{"request": []string{string(request)}}) + xcheckf(err, "http post") + defer resp.Body.Close() + if resp.StatusCode == http.StatusBadRequest { + buf, err := io.ReadAll(&moxio.LimitReader{R: resp.Body, Limit: 10 * 1024}) + xcheckf(err, "reading response for 400 bad request error") + err = json.Unmarshal(buf, &response) + if err == nil { + printJSON("", response) + } else { + fmt.Fprintf(os.Stderr, "(not json)\n") + os.Stderr.Write(buf) + } + os.Exit(1) + } else if resp.StatusCode != http.StatusOK { + fmt.Fprintf(os.Stderr, "http response %s\n", resp.Status) + _, err := io.Copy(os.Stderr, resp.Body) + xcheckf(err, "copy body") + } else { + err := 
json.NewDecoder(resp.Body).Decode(&response) + xcheckf(err, "unmarshal response") + printJSON("", response) + } +} + +func printJSON(indent string, v any) { + fmt.Printf("%s", indent) + enc := json.NewEncoder(os.Stdout) + enc.SetIndent(indent, "\t") + enc.SetEscapeHTML(false) + err := enc.Encode(v) + xcheckf(err, "encode json") +} + // todo: should make it possible to run this command against a running mox. it should disconnect existing clients for accounts with a bumped uidvalidity, so they will reconnect and refetch the data. func cmdBumpUIDValidity(c *cmd) { c.params = "account [mailbox]" @@ -3020,6 +3132,8 @@ func cmdMessageParse(c *cmd) { c.params = "message.eml" c.help = "Parse message, print JSON representation." + var smtputf8 bool + c.flag.BoolVar(&smtputf8, "smtputf8", false, "check if message needs smtputf8") args := c.Parse() if len(args) != 1 { c.Usage() @@ -3035,8 +3149,40 @@ func cmdMessageParse(c *cmd) { xcheckf(err, "parsing nested parts") enc := json.NewEncoder(os.Stdout) enc.SetIndent("", "\t") + enc.SetEscapeHTML(false) err = enc.Encode(part) xcheckf(err, "write") + + hasNonASCII := func(r io.Reader) bool { + br := bufio.NewReader(r) + for { + b, err := br.ReadByte() + if err == io.EOF { + break + } + xcheckf(err, "read header") + if b > 0x7f { + return true + } + } + return false + } + + var walk func(p *message.Part) bool + walk = func(p *message.Part) bool { + if hasNonASCII(p.HeaderReader()) { + return true + } + for _, pp := range p.Parts { + if walk(&pp) { + return true + } + } + return false + } + if smtputf8 { + fmt.Println("message needs smtputf8:", walk(&part)) + } } func cmdOpenaccounts(c *cmd) { @@ -3240,3 +3386,134 @@ Opens database files directly, not going through a running mox instance. log.Printf("account %s, total time %s", accName, time.Since(t0)) } } + +func cmdQueueFillRetired(c *cmd) { + c.unlisted = true + c.help = `Fill retired message and webhook queues with testdata. + +For testing the pagination. 
Operates directly on queue database. +` + var n int + c.flag.IntVar(&n, "n", 10000, "retired messages and retired webhooks to insert") + args := c.Parse() + if len(args) != 0 { + c.Usage() + } + + mustLoadConfig() + err := queue.Init() + xcheckf(err, "init queue") + err = queue.DB.Write(context.Background(), func(tx *bstore.Tx) error { + now := time.Now() + + // Cause autoincrement ID for queue.Msg to be forwarded, and use the reserved ID + // space for inserting retired messages. + fm := queue.Msg{} + err = tx.Insert(&fm) + xcheckf(err, "temporarily insert message to get autoincrement sequence") + err = tx.Delete(&fm) + xcheckf(err, "removing temporary message for resetting autoincrement sequence") + fm.ID += int64(n) + err = tx.Insert(&fm) + xcheckf(err, "temporarily insert message to forward autoincrement sequence") + err = tx.Delete(&fm) + xcheckf(err, "removing temporary message after forwarding autoincrement sequence") + fm.ID -= int64(n) + + // And likewise for webhooks. + fh := queue.Hook{Account: "x", URL: "x", NextAttempt: time.Now()} + err = tx.Insert(&fh) + xcheckf(err, "temporarily insert webhook to get autoincrement sequence") + err = tx.Delete(&fh) + xcheckf(err, "removing temporary webhook for resetting autoincrement sequence") + fh.ID += int64(n) + err = tx.Insert(&fh) + xcheckf(err, "temporarily insert webhook to forward autoincrement sequence") + err = tx.Delete(&fh) + xcheckf(err, "removing temporary webhook after forwarding autoincrement sequence") + fh.ID -= int64(n) + + for i := 0; i < n; i++ { + t0 := now.Add(-time.Duration(i) * time.Second) + last := now.Add(-time.Duration(i/10) * time.Second) + mr := queue.MsgRetired{ + ID: fm.ID + int64(i), + Queued: t0, + SenderAccount: "test", + SenderLocalpart: "mox", + SenderDomainStr: "localhost", + FromID: fmt.Sprintf("%016d", i), + RecipientLocalpart: "mox", + RecipientDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "localhost"}}, + RecipientDomainStr: "localhost", + Attempts: i % 6, + LastAttempt: 
&last, + Results: []queue.MsgResult{ + { + Start: last, + Duration: time.Millisecond, + Success: i%10 != 0, + Code: 250, + }, + }, + Has8bit: i%2 == 0, + SMTPUTF8: i%8 == 0, + Size: int64(i * 100), + MessageID: fmt.Sprintf("<msg%d@localhost>", i), + Subject: fmt.Sprintf("test message %d", i), + Extra: map[string]string{"i": fmt.Sprintf("%d", i)}, + LastActivity: last, + RecipientAddress: "mox@localhost", + Success: i%10 != 0, + KeepUntil: now.Add(48 * time.Hour), + } + err := tx.Insert(&mr) + xcheckf(err, "inserting retired message") + } + + for i := 0; i < n; i++ { + t0 := now.Add(-time.Duration(i) * time.Second) + last := now.Add(-time.Duration(i/10) * time.Second) + var event string + if i%10 != 0 { + event = "delivered" + } + hr := queue.HookRetired{ + ID: fh.ID + int64(i), + QueueMsgID: fm.ID + int64(i), + FromID: fmt.Sprintf("%016d", i), + MessageID: fmt.Sprintf("<msg%d@localhost>", i), + Subject: fmt.Sprintf("test message %d", i), + Extra: map[string]string{"i": fmt.Sprintf("%d", i)}, + Account: "test", + URL: "http://localhost/hook", + IsIncoming: i%10 == 0, + OutgoingEvent: event, + Payload: "{}", + + Submitted: t0, + Attempts: i % 6, + Results: []queue.HookResult{ + { + Start: t0, + Duration: time.Millisecond, + URL: "http://localhost/hook", + Success: i%10 != 0, + Code: 200, + Response: "ok", + }, + }, + + Success: i%10 != 0, + LastActivity: last, + KeepUntil: now.Add(48 * time.Hour), + } + err := tx.Insert(&hr) + xcheckf(err, "inserting retired hook") + } + + return nil + }) + xcheckf(err, "add to queue") + log.Printf("added %d retired messages and %d retired webhooks", n, n) +} diff --git a/message/compose.go b/message/compose.go index 4289fee..fafab10 100644 --- a/message/compose.go +++ b/message/compose.go @@ -141,7 +141,7 @@ func (c *Composer) Line() { // with newlines (lf), which are replaced with crlf. The returned text may be // quotedprintable, if needed. The returned ct and cte headers are for use with // Content-Type and Content-Transfer-Encoding headers. 
-func (c *Composer) TextPart(text string) (textBody []byte, ct, cte string) { +func (c *Composer) TextPart(subtype, text string) (textBody []byte, ct, cte string) { if !strings.HasSuffix(text, "\n") { text += "\n" } @@ -162,7 +162,7 @@ func (c *Composer) TextPart(text string) (textBody []byte, ct, cte string) { cte = "7bit" } - ct = mime.FormatMediaType("text/plain", map[string]string{"charset": charset}) + ct = mime.FormatMediaType("text/"+subtype, map[string]string{"charset": charset}) return []byte(text), ct, cte } diff --git a/message/examples_test.go b/message/examples_test.go index 4f31bf8..b98daea 100644 --- a/message/examples_test.go +++ b/message/examples_test.go @@ -120,7 +120,7 @@ func ExampleComposer() { xc.Header("MIME-Version", "1.0") // Write content-* headers for the text body. - body, ct, cte := xc.TextPart("this is the body") + body, ct, cte := xc.TextPart("plain", "this is the body") xc.Header("Content-Type", ct) xc.Header("Content-Transfer-Encoding", cte) diff --git a/message/part.go b/message/part.go index 7a3a108..42b2a04 100644 --- a/message/part.go +++ b/message/part.go @@ -97,8 +97,8 @@ type Envelope struct { To []Address CC []Address BCC []Address - InReplyTo string - MessageID string + InReplyTo string // From In-Reply-To header, includes <>. + MessageID string // From Message-Id header, includes <>. } // Address as used in From and To headers. diff --git a/metrics/auth.go b/metrics/auth.go index ae7906c..940e366 100644 --- a/metrics/auth.go +++ b/metrics/auth.go @@ -13,8 +13,8 @@ var ( Help: "Authentication attempts and results.", }, []string{ - "kind", // submission, imap, webmail, webaccount, webadmin (formerly httpaccount, httpadmin) - "variant", // login, plain, scram-sha-256, scram-sha-1, cram-md5, weblogin, websessionuse. formerly: httpbasic. 
+ "kind", // submission, imap, webmail, webapi, webaccount, webadmin (formerly httpaccount, httpadmin) + "variant", // login, plain, scram-sha-256, scram-sha-1, cram-md5, weblogin, websessionuse, httpbasic. // todo: we currently only use badcreds, but known baduser can be helpful "result", // ok, baduser, badpassword, badcreds, error, aborted }, diff --git a/metrics/panic.go b/metrics/panic.go index 952601b..aaf0fe1 100644 --- a/metrics/panic.go +++ b/metrics/panic.go @@ -35,6 +35,7 @@ const ( Importmessages Panic = "importmessages" Store Panic = "store" Webadmin Panic = "webadmin" + Webapi Panic = "webapi" Webmailsendevent Panic = "webmailsendevent" Webmail Panic = "webmail" Webmailrequest Panic = "webmailrequest" diff --git a/mox-/admin.go b/mox-/admin.go index 3f71213..8056ce7 100644 --- a/mox-/admin.go +++ b/mox-/admin.go @@ -899,6 +899,7 @@ func AddressAdd(ctx context.Context, address, account string) (rerr error) { } // AddressRemove removes an email address and reloads the configuration. +// Address can be a catchall address for the domain of the form "@". func AddressRemove(ctx context.Context, address string) (rerr error) { log := pkglog.WithContext(ctx) defer func() { @@ -934,6 +935,52 @@ func AddressRemove(ctx context.Context, address string) (rerr error) { if !dropped { return fmt.Errorf("address not removed, likely a postmaster/reporting address") } + + // Also remove matching address from FromIDLoginAddresses, composing a new slice. + var fromIDLoginAddresses []string + var dom dns.Domain + var pa smtp.Address // For non-catchall addresses (most). 
+ var err error + if strings.HasPrefix(address, "@") { + dom, err = dns.ParseDomain(address[1:]) + if err != nil { + return fmt.Errorf("parsing domain for catchall address: %v", err) + } + } else { + pa, err = smtp.ParseAddress(address) + if err != nil { + return fmt.Errorf("parsing address: %v", err) + } + dom = pa.Domain + } + for i, fa := range a.ParsedFromIDLoginAddresses { + if fa.Domain != dom { + // Keep for different domain. + fromIDLoginAddresses = append(fromIDLoginAddresses, a.FromIDLoginAddresses[i]) + continue + } + if strings.HasPrefix(address, "@") { + continue + } + dc, ok := Conf.Dynamic.Domains[dom.Name()] + if !ok { + return fmt.Errorf("unknown domain in fromid login address %q", fa.Pack(true)) + } + flp, err := CanonicalLocalpart(fa.Localpart, dc) + if err != nil { + return fmt.Errorf("getting canonical localpart for fromid login address %q: %v", fa.Localpart, err) + } + alp, err := CanonicalLocalpart(pa.Localpart, dc) + if err != nil { + return fmt.Errorf("getting canonical part for address: %v", err) + } + if alp != flp { + // Keep for different localpart. + fromIDLoginAddresses = append(fromIDLoginAddresses, a.FromIDLoginAddresses[i]) + } + } + na.FromIDLoginAddresses = fromIDLoginAddresses + nc := Conf.Dynamic nc.Accounts = map[string]config.Account{} for name, a := range Conf.Dynamic.Accounts { @@ -948,12 +995,16 @@ func AddressRemove(ctx context.Context, address string) (rerr error) { return nil } -// AccountFullNameSave updates the full name for an account and reloads the configuration. -func AccountFullNameSave(ctx context.Context, account, fullName string) (rerr error) { +// AccountSave updates the configuration of an account. Function xmodify is called +// with a shallow copy of the current configuration of the account. It must not +// change referencing fields (e.g. existing slice/map/pointer), they may still be +// in use, and the change may be rolled back. Referencing values must be copied and +// replaced by the modify. 
The function may raise a panic for error handling. +func AccountSave(ctx context.Context, account string, xmodify func(acc *config.Account)) (rerr error) { log := pkglog.WithContext(ctx) defer func() { if rerr != nil { - log.Errorx("saving account full name", rerr, slog.String("account", account)) + log.Errorx("saving account fields", rerr, slog.String("account", account)) } }() @@ -966,6 +1017,8 @@ func AccountFullNameSave(ctx context.Context, account, fullName string) (rerr er return fmt.Errorf("account not present") } + xmodify(&acc) + // Compose new config without modifying existing data structures. If we fail, we // leave no trace. nc := c @@ -973,100 +1026,12 @@ func AccountFullNameSave(ctx context.Context, account, fullName string) (rerr er for name, a := range c.Accounts { nc.Accounts[name] = a } - - acc.FullName = fullName nc.Accounts[account] = acc if err := writeDynamic(ctx, log, nc); err != nil { - return fmt.Errorf("writing domains.conf: %v", err) + return fmt.Errorf("writing domains.conf: %w", err) } - log.Info("account full name saved", slog.String("account", account)) - return nil -} - -// DestinationSave updates a destination for an account and reloads the configuration. -func DestinationSave(ctx context.Context, account, destName string, newDest config.Destination) (rerr error) { - log := pkglog.WithContext(ctx) - defer func() { - if rerr != nil { - log.Errorx("saving destination", rerr, - slog.String("account", account), - slog.String("destname", destName), - slog.Any("destination", newDest)) - } - }() - - Conf.dynamicMutex.Lock() - defer Conf.dynamicMutex.Unlock() - - c := Conf.Dynamic - acc, ok := c.Accounts[account] - if !ok { - return fmt.Errorf("account not present") - } - - if _, ok := acc.Destinations[destName]; !ok { - return fmt.Errorf("destination not present") - } - - // Compose new config without modifying existing data structures. If we fail, we - // leave no trace. 
- nc := c - nc.Accounts = map[string]config.Account{} - for name, a := range c.Accounts { - nc.Accounts[name] = a - } - nd := map[string]config.Destination{} - for dn, d := range acc.Destinations { - nd[dn] = d - } - nd[destName] = newDest - nacc := nc.Accounts[account] - nacc.Destinations = nd - nc.Accounts[account] = nacc - - if err := writeDynamic(ctx, log, nc); err != nil { - return fmt.Errorf("writing domains.conf: %v", err) - } - log.Info("destination saved", slog.String("account", account), slog.String("destname", destName)) - return nil -} - -// AccountAdminSettingsSave saves new account settings for an account only an admin can change. -func AccountAdminSettingsSave(ctx context.Context, account string, maxOutgoingMessagesPerDay, maxFirstTimeRecipientsPerDay int, quotaMessageSize int64, firstTimeSenderDelay bool) (rerr error) { - log := pkglog.WithContext(ctx) - defer func() { - if rerr != nil { - log.Errorx("saving admin account settings", rerr, slog.String("account", account)) - } - }() - - Conf.dynamicMutex.Lock() - defer Conf.dynamicMutex.Unlock() - - c := Conf.Dynamic - acc, ok := c.Accounts[account] - if !ok { - return fmt.Errorf("account not present") - } - - // Compose new config without modifying existing data structures. If we fail, we - // leave no trace. 
- nc := c - nc.Accounts = map[string]config.Account{} - for name, a := range c.Accounts { - nc.Accounts[name] = a - } - acc.MaxOutgoingMessagesPerDay = maxOutgoingMessagesPerDay - acc.MaxFirstTimeRecipientsPerDay = maxFirstTimeRecipientsPerDay - acc.QuotaMessageSize = quotaMessageSize - acc.NoFirstTimeSenderDelay = !firstTimeSenderDelay - nc.Accounts[account] = acc - - if err := writeDynamic(ctx, log, nc); err != nil { - return fmt.Errorf("writing domains.conf: %v", err) - } - log.Info("admin account settings saved", slog.String("account", account)) + log.Info("account fields saved", slog.String("account", account)) return nil } diff --git a/mox-/config.go b/mox-/config.go index 9df65af..4c02388 100644 --- a/mox-/config.go +++ b/mox-/config.go @@ -61,6 +61,8 @@ var ( Conf = Config{Log: map[string]slog.Level{"": slog.LevelError}} ) +var ErrConfig = errors.New("config error") + // Config as used in the code, a processed version of what is in the config file. // // Use methods to lookup a domain/account/address in the dynamic configuration. @@ -317,10 +319,11 @@ func (c *Config) allowACMEHosts(log mlog.Log, checkACMEHosts bool) { // todo future: write config parsing & writing code that can read a config and remembers the exact tokens including newlines and comments, and can write back a modified file. the goal is to be able to write a config file automatically (after changing fields through the ui), but not loose comments and whitespace, to still get useful diffs for storing the config in a version control system. // must be called with lock held. +// Returns ErrConfig if the configuration is not valid. 
func writeDynamic(ctx context.Context, log mlog.Log, c config.Dynamic) error { accDests, errs := prepareDynamicConfig(ctx, log, ConfigDynamicPath, Conf.Static, &c) if len(errs) > 0 { - return errs[0] + return fmt.Errorf("%w: %v", ErrConfig, errs[0]) } var b bytes.Buffer @@ -1272,8 +1275,52 @@ func prepareDynamicConfig(ctx context.Context, log mlog.Log, dynamicPath string, } acc.NotJunkMailbox = r } + + acc.ParsedFromIDLoginAddresses = make([]smtp.Address, len(acc.FromIDLoginAddresses)) + for i, s := range acc.FromIDLoginAddresses { + a, err := smtp.ParseAddress(s) + if err != nil { + addErrorf("invalid fromid login address %q in account %q: %v", s, accName, err) + } + // We check later on if address belongs to account. + dom, ok := c.Domains[a.Domain.Name()] + if !ok { + addErrorf("unknown domain in fromid login address %q for account %q", s, accName) + } else if dom.LocalpartCatchallSeparator == "" { + addErrorf("localpart catchall separator not configured for domain for fromid login address %q for account %q", s, accName) + } + acc.ParsedFromIDLoginAddresses[i] = a + } + c.Accounts[accName] = acc + if acc.OutgoingWebhook != nil { + u, err := url.Parse(acc.OutgoingWebhook.URL) + if err == nil && (u.Scheme != "http" && u.Scheme != "https") { + err = errors.New("scheme must be http or https") + } + if err != nil { + addErrorf("parsing outgoing hook url %q in account %q: %v", acc.OutgoingWebhook.URL, accName, err) + } + + // note: outgoing hook events are in ../queue/hooks.go, ../mox-/config.go, ../queue.go and ../webapi/gendoc.sh. keep in sync. 
+ outgoingHookEvents := []string{"delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized"} + for _, e := range acc.OutgoingWebhook.Events { + if !slices.Contains(outgoingHookEvents, e) { + addErrorf("unknown outgoing hook event %q", e) + } + } + } + if acc.IncomingWebhook != nil { + u, err := url.Parse(acc.IncomingWebhook.URL) + if err == nil && (u.Scheme != "http" && u.Scheme != "https") { + err = errors.New("scheme must be http or https") + } + if err != nil { + addErrorf("parsing incoming hook url %q in account %q: %v", acc.IncomingWebhook.URL, accName, err) + } + } + // todo deprecated: only localpart as keys for Destinations, we are replacing them with full addresses. if domains.conf is written, we won't have to do this again. replaceLocalparts := map[string]string{} @@ -1423,6 +1470,25 @@ func prepareDynamicConfig(ctx context.Context, log mlog.Log, dynamicPath string, } } + // Now that all addresses are parsed, check if all fromid login addresses match + // configured addresses. + for i, a := range acc.ParsedFromIDLoginAddresses { + // For domain catchall. + if _, ok := accDests["@"+a.Domain.Name()]; ok { + continue + } + dc := c.Domains[a.Domain.Name()] + lp, err := CanonicalLocalpart(a.Localpart, dc) + if err != nil { + addErrorf("canonicalizing localpart for fromid login address %q in account %q: %v", acc.FromIDLoginAddresses[i], accName, err) + continue + } + a.Localpart = lp + if _, ok := accDests[a.Pack(true)]; !ok { + addErrorf("fromid login address %q for account %q does not match its destination addresses", acc.FromIDLoginAddresses[i], accName) + } + } + checkRoutes("routes for account", acc.Routes) } diff --git a/mox-/fill.go b/mox-/fill.go new file mode 100644 index 0000000..9c96373 --- /dev/null +++ b/mox-/fill.go @@ -0,0 +1,113 @@ +package mox + +import ( + "reflect" +) + +// FillNil returns a modified value with nil maps/slices replaced with empty +// maps/slices. 
+func FillNil(rv reflect.Value) (nv reflect.Value, changed bool) { + switch rv.Kind() { + case reflect.Struct: + for i := 0; i < rv.NumField(); i++ { + if !rv.Type().Field(i).IsExported() { + continue + } + vv := rv.Field(i) + nvv, ch := FillNil(vv) + if ch && !rv.CanSet() { + // Make struct settable. + nrv := reflect.New(rv.Type()).Elem() + for j := 0; j < rv.NumField(); j++ { + nrv.Field(j).Set(rv.Field(j)) + } + rv = nrv + vv = rv.Field(i) + } + if ch { + changed = true + vv.Set(nvv) + } + } + case reflect.Slice: + if rv.IsNil() { + return reflect.MakeSlice(rv.Type(), 0, 0), true + } + n := rv.Len() + for i := 0; i < n; i++ { + rve := rv.Index(i) + nrv, ch := FillNil(rve) + if ch { + changed = true + rve.Set(nrv) + } + } + case reflect.Map: + if rv.IsNil() { + return reflect.MakeMap(rv.Type()), true + } + i := rv.MapRange() + for i.Next() { + erv, ch := FillNil(i.Value()) + if ch { + changed = true + rv.SetMapIndex(i.Key(), erv) + } + } + case reflect.Pointer: + if !rv.IsNil() { + FillNil(rv.Elem()) + } + } + return rv, changed +} + +// FillExample returns a modified value with nil/empty maps/slices/pointers values +// replaced with non-empty versions, for more helpful examples of types. Useful for +// documenting JSON representations of types. +func FillExample(seen []reflect.Type, rv reflect.Value) reflect.Value { + if seen == nil { + seen = make([]reflect.Type, 100) + } + + // Prevent recursive filling. 
+ rvt := rv.Type() + index := -1 + for i, t := range seen { + if t == rvt { + return rv + } else if t == nil { + index = i + } + } + if index < 0 { + return rv + } + seen[index] = rvt + defer func() { + seen[index] = nil + }() + + switch rv.Kind() { + case reflect.Struct: + for i := 0; i < rv.NumField(); i++ { + if !rvt.Field(i).IsExported() { + continue + } + vv := rv.Field(i) + vv.Set(FillExample(seen, vv)) + } + case reflect.Slice: + ev := FillExample(seen, reflect.New(rvt.Elem()).Elem()) + return reflect.Append(rv, ev) + case reflect.Map: + vv := FillExample(seen, reflect.New(rvt.Elem()).Elem()) + nv := reflect.MakeMap(rvt) + nv.SetMapIndex(reflect.ValueOf("example"), vv) + return nv + case reflect.Pointer: + nv := reflect.New(rvt.Elem()) + return FillExample(seen, nv.Elem()).Addr() + } + return rv +} diff --git a/mox-/localserve.go b/mox-/localserve.go new file mode 100644 index 0000000..84d9cce --- /dev/null +++ b/mox-/localserve.go @@ -0,0 +1,31 @@ +package mox + +import ( + "strconv" + "strings" + + "github.com/mjl-/mox/smtp" +) + +func LocalserveNeedsError(lp smtp.Localpart) (code int, timeout bool) { + s := string(lp) + if strings.HasSuffix(s, "temperror") { + return smtp.C451LocalErr, false + } else if strings.HasSuffix(s, "permerror") { + return smtp.C550MailboxUnavail, false + } else if strings.HasSuffix(s, "timeout") { + return 0, true + } + if len(s) < 3 { + return 0, false + } + s = s[len(s)-3:] + v, err := strconv.ParseInt(s, 10, 32) + if err != nil { + return 0, false + } + if v < 400 || v > 600 { + return 0, false + } + return int(v), false +} diff --git a/moxio/limitreader.go b/moxio/limitreader.go index f025ecc..3d827c7 100644 --- a/moxio/limitreader.go +++ b/moxio/limitreader.go @@ -1,5 +1,7 @@ package moxio +// similar between ../moxio/limitreader.go and ../webapi/limitreader.go + import ( "errors" "io" diff --git a/moxvar/version.go b/moxvar/version.go index 8c6bac8..0141b86 100644 --- a/moxvar/version.go +++ b/moxvar/version.go @@ -8,12 
+8,16 @@ import ( // Version is set at runtime based on the Go module used to build. var Version = "(devel)" +// VersionBare does not add a "+modifications" or other suffix to the version. +var VersionBare = "(devel)" + func init() { buildInfo, ok := debug.ReadBuildInfo() if !ok { return } Version = buildInfo.Main.Version + VersionBare = buildInfo.Main.Version if Version == "(devel)" { var vcsRev, vcsMod string for _, setting := range buildInfo.Settings { @@ -27,6 +31,7 @@ func init() { return } Version = vcsRev + VersionBare = vcsRev switch vcsMod { case "false": case "true": diff --git a/queue.go b/queue.go index c0890ef..3c86c77 100644 --- a/queue.go +++ b/queue.go @@ -14,6 +14,12 @@ import ( "github.com/mjl-/mox/queue" ) +func xctlwriteJSON(ctl *ctl, v any) { + fbuf, err := json.Marshal(v) + xcheckf(err, "marshal as json to ctl") + ctl.xwrite(string(fbuf)) +} + func cmdQueueHoldrulesList(c *cmd) { c.help = `List hold rules for the delivery queue. @@ -84,9 +90,9 @@ func ctlcmdQueueHoldrulesRemove(ctl *ctl, id int64) { ctl.xreadok() } -// flagFilter is used by many of the queue commands to accept flags for filtering -// the messages the operation applies to. -func flagFilter(fs *flag.FlagSet, f *queue.Filter) { +// flagFilterSort is used by many of the queue commands to accept flags for +// filtering the messages the operation applies to. 
+func flagFilterSort(fs *flag.FlagSet, f *queue.Filter, s *queue.Sort) { fs.Func("ids", "comma-separated list of message IDs", func(v string) error { for _, s := range strings.Split(v, ",") { id, err := strconv.ParseInt(s, 10, 64) @@ -97,6 +103,7 @@ func flagFilter(fs *flag.FlagSet, f *queue.Filter) { } return nil }) + fs.IntVar(&f.Max, "n", 0, "number of messages to return") fs.StringVar(&f.Account, "account", "", "account that queued the message") fs.StringVar(&f.From, "from", "", `from address of message, use "@example.com" to match all messages for a domain`) fs.StringVar(&f.To, "to", "", `recipient address of message, use "@example.com" to match all messages for a domain`) @@ -118,32 +125,93 @@ func flagFilter(fs *flag.FlagSet, f *queue.Filter) { f.Hold = &hold return nil }) + if s != nil { + fs.Func("sort", `field to sort by, "nextattempt" (default) or "queued"`, func(v string) error { + switch v { + case "nextattempt": + s.Field = "NextAttempt" + case "queued": + s.Field = "Queued" + default: + return fmt.Errorf("unknown value %q", v) + } + return nil + }) + fs.BoolVar(&s.Asc, "asc", false, "sort ascending instead of descending (default)") + } +} + +// flagRetiredFilterSort has filters for retired messages. 
+func flagRetiredFilterSort(fs *flag.FlagSet, f *queue.RetiredFilter, s *queue.RetiredSort) { + fs.Func("ids", "comma-separated list of retired message IDs", func(v string) error { + for _, s := range strings.Split(v, ",") { + id, err := strconv.ParseInt(s, 10, 64) + if err != nil { + return err + } + f.IDs = append(f.IDs, id) + } + return nil + }) + fs.IntVar(&f.Max, "n", 0, "number of messages to return") + fs.StringVar(&f.Account, "account", "", "account that queued the message") + fs.StringVar(&f.From, "from", "", `from address of message, use "@example.com" to match all messages for a domain`) + fs.StringVar(&f.To, "to", "", `recipient address of message, use "@example.com" to match all messages for a domain`) + fs.StringVar(&f.Submitted, "submitted", "", `filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now)`) + fs.StringVar(&f.LastActivity, "lastactivity", "", `filter by time of last activity relative to now, value must start with "<" (before now) or ">" (after now)`) + fs.Func("transport", "transport to use for messages, empty string sets the default behaviour", func(v string) error { + f.Transport = &v + return nil + }) + fs.Func("result", `"success" or "failure" as result of delivery`, func(v string) error { + switch v { + case "success": + t := true + f.Success = &t + case "failure": + t := false + f.Success = &t + default: + return fmt.Errorf("bad argument %q, need success or failure", v) + } + return nil + }) + if s != nil { + fs.Func("sort", `field to sort by, "lastactivity" (default) or "queued"`, func(v string) error { + switch v { + case "lastactivity": + s.Field = "LastActivity" + case "queued": + s.Field = "Queued" + default: + return fmt.Errorf("unknown value %q", v) + } + return nil + }) + fs.BoolVar(&s.Asc, "asc", false, "sort ascending instead of descending (default)") + } } func cmdQueueList(c *cmd) { - c.params = "[filterflags]" + c.params = "[filtersortflags]" c.help = `List matching 
messages in the delivery queue. Prints the message with its ID, last and next delivery attempts, last error. ` var f queue.Filter - flagFilter(c.flag, &f) + var s queue.Sort + flagFilterSort(c.flag, &f, &s) if len(c.Parse()) != 0 { c.Usage() } mustLoadConfig() - ctlcmdQueueList(xctl(), f) + ctlcmdQueueList(xctl(), f, s) } -func xctlwritequeuefilter(ctl *ctl, f queue.Filter) { - fbuf, err := json.Marshal(f) - xcheckf(err, "marshal filter") - ctl.xwrite(string(fbuf)) -} - -func ctlcmdQueueList(ctl *ctl, f queue.Filter) { +func ctlcmdQueueList(ctl *ctl, f queue.Filter, s queue.Sort) { ctl.xwrite("queuelist") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) + xctlwriteJSON(ctl, s) ctl.xreadok() if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { log.Fatalf("%s", err) @@ -158,7 +226,7 @@ Messages that are on hold are not delivered until marked as off hold again, or otherwise handled by the admin. ` var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) if len(c.Parse()) != 0 { c.Usage() } @@ -174,7 +242,7 @@ Once off hold, messages can be delivered according to their current next delivery attempt. See the "queue schedule" command. ` var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) if len(c.Parse()) != 0 { c.Usage() } @@ -184,7 +252,7 @@ delivery attempt. See the "queue schedule" command. func ctlcmdQueueHoldSet(ctl *ctl, f queue.Filter, hold bool) { ctl.xwrite("queueholdset") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) if hold { ctl.xwrite("true") } else { @@ -199,7 +267,7 @@ func ctlcmdQueueHoldSet(ctl *ctl, f queue.Filter, hold bool) { } func cmdQueueSchedule(c *cmd) { - c.params = "[filterflags] duration" + c.params = "[filterflags] [-now] duration" c.help = `Change next delivery attempt for matching messages. The next delivery attempt is adjusted by the duration parameter. If the -now @@ -211,7 +279,7 @@ Schedule immediate delivery with "mox queue schedule -now 0". 
var fromNow bool c.flag.BoolVar(&fromNow, "now", false, "schedule for duration relative to current time instead of relative to current next delivery attempt for messages") var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) args := c.Parse() if len(args) != 1 { c.Usage() @@ -224,7 +292,7 @@ Schedule immediate delivery with "mox queue schedule -now 0". func ctlcmdQueueSchedule(ctl *ctl, f queue.Filter, fromNow bool, d time.Duration) { ctl.xwrite("queueschedule") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) if fromNow { ctl.xwrite("yes") } else { @@ -233,7 +301,7 @@ func ctlcmdQueueSchedule(ctl *ctl, f queue.Filter, fromNow bool, d time.Duration ctl.xwrite(d.String()) line := ctl.xread() if line == "ok" { - fmt.Printf("%s messages rescheduled\n", ctl.xread()) + fmt.Printf("%s message(s) rescheduled\n", ctl.xread()) } else { log.Fatalf("%s", line) } @@ -249,7 +317,7 @@ configured transport assigned to use for delivery, e.g. using submission to another mail server or with connections over a SOCKS proxy. ` var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) args := c.Parse() if len(args) != 1 { c.Usage() @@ -260,11 +328,11 @@ another mail server or with connections over a SOCKS proxy. func ctlcmdQueueTransport(ctl *ctl, f queue.Filter, transport string) { ctl.xwrite("queuetransport") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) ctl.xwrite(transport) line := ctl.xread() if line == "ok" { - fmt.Printf("%s messages changed\n", ctl.xread()) + fmt.Printf("%s message(s) changed\n", ctl.xread()) } else { log.Fatalf("%s", line) } @@ -285,7 +353,7 @@ Value "default" is the default behaviour, currently for unverified opportunistic TLS. ` var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) args := c.Parse() if len(args) != 1 { c.Usage() @@ -308,7 +376,7 @@ TLS. 
func ctlcmdQueueRequireTLS(ctl *ctl, f queue.Filter, tlsreq *bool) { ctl.xwrite("queuerequiretls") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) var req string if tlsreq == nil { req = "" @@ -320,7 +388,7 @@ func ctlcmdQueueRequireTLS(ctl *ctl, f queue.Filter, tlsreq *bool) { ctl.xwrite(req) line := ctl.xread() if line == "ok" { - fmt.Printf("%s messages changed\n", ctl.xread()) + fmt.Printf("%s message(s) changed\n", ctl.xread()) } else { log.Fatalf("%s", line) } @@ -335,7 +403,7 @@ delivery attempts failed. The DSN (delivery status notification) message contains a line saying the message was canceled by the admin. ` var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) if len(c.Parse()) != 0 { c.Usage() } @@ -345,10 +413,10 @@ contains a line saying the message was canceled by the admin. func ctlcmdQueueFail(ctl *ctl, f queue.Filter) { ctl.xwrite("queuefail") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) line := ctl.xread() if line == "ok" { - fmt.Printf("%s messages marked as failed\n", ctl.xread()) + fmt.Printf("%s message(s) marked as failed\n", ctl.xread()) } else { log.Fatalf("%s", line) } @@ -362,7 +430,7 @@ Dangerous operation, this completely removes the message. If you want to store the message, use "queue dump" before removing. ` var f queue.Filter - flagFilter(c.flag, &f) + flagFilterSort(c.flag, &f, nil) if len(c.Parse()) != 0 { c.Usage() } @@ -372,10 +440,10 @@ the message, use "queue dump" before removing. 
func ctlcmdQueueDrop(ctl *ctl, f queue.Filter) { ctl.xwrite("queuedrop") - xctlwritequeuefilter(ctl, f) + xctlwriteJSON(ctl, f) line := ctl.xread() if line == "ok" { - fmt.Printf("%s messages dropped\n", ctl.xread()) + fmt.Printf("%s message(s) dropped\n", ctl.xread()) } else { log.Fatalf("%s", line) } @@ -403,3 +471,381 @@ func ctlcmdQueueDump(ctl *ctl, id string) { log.Fatalf("%s", err) } } + +func cmdQueueSuppressList(c *cmd) { + c.params = "[-account account]" + c.help = `Print addresses in suppression list.` + var account string + c.flag.StringVar(&account, "account", "", "only show suppression list for this account") + args := c.Parse() + if len(args) != 0 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueSuppressList(xctl(), account) +} + +func ctlcmdQueueSuppressList(ctl *ctl, account string) { + ctl.xwrite("queuesuppresslist") + ctl.xwrite(account) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +func cmdQueueSuppressAdd(c *cmd) { + c.params = "account address" + c.help = `Add address to suppression list for account.` + args := c.Parse() + if len(args) != 2 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueSuppressAdd(xctl(), args[0], args[1]) +} + +func ctlcmdQueueSuppressAdd(ctl *ctl, account, address string) { + ctl.xwrite("queuesuppressadd") + ctl.xwrite(account) + ctl.xwrite(address) + ctl.xreadok() +} + +func cmdQueueSuppressRemove(c *cmd) { + c.params = "account address" + c.help = `Remove address from suppression list for account.` + args := c.Parse() + if len(args) != 2 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueSuppressRemove(xctl(), args[0], args[1]) +} + +func ctlcmdQueueSuppressRemove(ctl *ctl, account, address string) { + ctl.xwrite("queuesuppressremove") + ctl.xwrite(account) + ctl.xwrite(address) + ctl.xreadok() +} + +func cmdQueueSuppressLookup(c *cmd) { + c.params = "[-account account] address" + c.help = `Check if address is present in suppression list, for any or 
specific account.` + var account string + c.flag.StringVar(&account, "account", "", "only check address in specified account") + args := c.Parse() + if len(args) != 1 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueSuppressLookup(xctl(), account, args[0]) +} + +func ctlcmdQueueSuppressLookup(ctl *ctl, account, address string) { + ctl.xwrite("queuesuppresslookup") + ctl.xwrite(account) + ctl.xwrite(address) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +func cmdQueueRetiredList(c *cmd) { + c.params = "[filtersortflags]" + c.help = `List matching messages in the retired queue. + +Prints messages with their ID and results. +` + var f queue.RetiredFilter + var s queue.RetiredSort + flagRetiredFilterSort(c.flag, &f, &s) + if len(c.Parse()) != 0 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueRetiredList(xctl(), f, s) +} + +func ctlcmdQueueRetiredList(ctl *ctl, f queue.RetiredFilter, s queue.RetiredSort) { + ctl.xwrite("queueretiredlist") + xctlwriteJSON(ctl, f) + xctlwriteJSON(ctl, s) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +func cmdQueueRetiredPrint(c *cmd) { + c.params = "id" + c.help = `Print a message from the retired queue. + +Prints a JSON representation of the information from the retired queue. +` + args := c.Parse() + if len(args) != 1 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueRetiredPrint(xctl(), args[0]) +} + +func ctlcmdQueueRetiredPrint(ctl *ctl, id string) { + ctl.xwrite("queueretiredprint") + ctl.xwrite(id) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +// note: outgoing hook events are in queue/hooks.go, mox-/config.go, queue.go and webapi/gendoc.sh. keep in sync. + +// flagHookFilterSort is used by many of the queue commands to accept flags for +// filtering the webhooks the operation applies to. 
+func flagHookFilterSort(fs *flag.FlagSet, f *queue.HookFilter, s *queue.HookSort) { + fs.Func("ids", "comma-separated list of webhook IDs", func(v string) error { + for _, s := range strings.Split(v, ",") { + id, err := strconv.ParseInt(s, 10, 64) + if err != nil { + return err + } + f.IDs = append(f.IDs, id) + } + return nil + }) + fs.IntVar(&f.Max, "n", 0, "number of webhooks to return") + fs.StringVar(&f.Account, "account", "", "account that queued the message/webhook") + fs.StringVar(&f.Submitted, "submitted", "", `filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now)`) + fs.StringVar(&f.NextAttempt, "nextattempt", "", `filter by time of next delivery attempt relative to now, value must start with "<" (before now) or ">" (after now)`) + fs.Func("event", `event this webhook is about: incoming, delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized`, func(v string) error { + switch v { + case "incoming", "delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized": + f.Event = v + default: + return fmt.Errorf("invalid parameter %q", v) + } + return nil + }) + if s != nil { + fs.Func("sort", `field to sort by, "nextattempt" (default) or "queued"`, func(v string) error { + switch v { + case "nextattempt": + s.Field = "NextAttempt" + case "queued": + s.Field = "Queued" + default: + return fmt.Errorf("unknown value %q", v) + } + return nil + }) + fs.BoolVar(&s.Asc, "asc", false, "sort ascending instead of descending (default)") + } +} + +// flagHookRetiredFilterSort is used by many of the queue commands to accept flags +// for filtering the webhooks the operation applies to. 
+func flagHookRetiredFilterSort(fs *flag.FlagSet, f *queue.HookRetiredFilter, s *queue.HookRetiredSort) { + fs.Func("ids", "comma-separated list of retired webhook IDs", func(v string) error { + for _, s := range strings.Split(v, ",") { + id, err := strconv.ParseInt(s, 10, 64) + if err != nil { + return err + } + f.IDs = append(f.IDs, id) + } + return nil + }) + fs.IntVar(&f.Max, "n", 0, "number of webhooks to return") + fs.StringVar(&f.Account, "account", "", "account that queued the message/webhook") + fs.StringVar(&f.Submitted, "submitted", "", `filter by time of submission relative to now, value must start with "<" (before now) or ">" (after now)`) + fs.StringVar(&f.LastActivity, "lastactivity", "", `filter by time of last activity relative to now, value must start with "<" (before now) or ">" (after now)`) + fs.Func("event", `event this webhook is about: incoming, delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized`, func(v string) error { + switch v { + case "incoming", "delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized": + f.Event = v + default: + return fmt.Errorf("invalid parameter %q", v) + } + return nil + }) + if s != nil { + fs.Func("sort", `field to sort by, "lastactivity" (default) or "queued"`, func(v string) error { + switch v { + case "lastactivity": + s.Field = "LastActivity" + case "queued": + s.Field = "Queued" + default: + return fmt.Errorf("unknown value %q", v) + } + return nil + }) + fs.BoolVar(&s.Asc, "asc", false, "sort ascending instead of descending (default)") + } +} + +func cmdQueueHookList(c *cmd) { + c.params = "[filtersortflags]" + c.help = `List matching webhooks in the queue. + +Prints list of webhooks, their IDs and basic information. 
+` + var f queue.HookFilter + var s queue.HookSort + flagHookFilterSort(c.flag, &f, &s) + if len(c.Parse()) != 0 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueHookList(xctl(), f, s) +} + +func ctlcmdQueueHookList(ctl *ctl, f queue.HookFilter, s queue.HookSort) { + ctl.xwrite("queuehooklist") + xctlwriteJSON(ctl, f) + xctlwriteJSON(ctl, s) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +func cmdQueueHookSchedule(c *cmd) { + c.params = "[filterflags] duration" + c.help = `Change next delivery attempt for matching webhooks. + +The next delivery attempt is adjusted by the duration parameter. If the -now +flag is set, the new delivery attempt is set to the duration added to the +current time, instead of added to the current scheduled time. + +Schedule immediate delivery with "mox queue hook schedule -now 0". +` + var fromNow bool + c.flag.BoolVar(&fromNow, "now", false, "schedule for duration relative to current time instead of relative to current next delivery attempt for webhooks") + var f queue.HookFilter + flagHookFilterSort(c.flag, &f, nil) + args := c.Parse() + if len(args) != 1 { + c.Usage() + } + d, err := time.ParseDuration(args[0]) + xcheckf(err, "parsing duration %q", args[0]) + mustLoadConfig() + ctlcmdQueueHookSchedule(xctl(), f, fromNow, d) +} + +func ctlcmdQueueHookSchedule(ctl *ctl, f queue.HookFilter, fromNow bool, d time.Duration) { + ctl.xwrite("queuehookschedule") + xctlwriteJSON(ctl, f) + if fromNow { + ctl.xwrite("yes") + } else { + ctl.xwrite("") + } + ctl.xwrite(d.String()) + line := ctl.xread() + if line == "ok" { + fmt.Printf("%s webhook(s) rescheduled\n", ctl.xread()) + } else { + log.Fatalf("%s", line) + } +} + +func cmdQueueHookCancel(c *cmd) { + c.params = "[filterflags]" + c.help = `Fail delivery of matching webhooks.` + var f queue.HookFilter + flagHookFilterSort(c.flag, &f, nil) + if len(c.Parse()) != 0 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueHookCancel(xctl(), f)
+} + +func ctlcmdQueueHookCancel(ctl *ctl, f queue.HookFilter) { + ctl.xwrite("queuehookcancel") + xctlwriteJSON(ctl, f) + line := ctl.xread() + if line == "ok" { + fmt.Printf("%s webhook(s) marked as canceled\n", ctl.xread()) + } else { + log.Fatalf("%s", line) + } +} + +func cmdQueueHookPrint(c *cmd) { + c.params = "id" + c.help = `Print details of a webhook from the queue. + +The webhook is printed to stdout as JSON. +` + args := c.Parse() + if len(args) != 1 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueHookPrint(xctl(), args[0]) +} + +func ctlcmdQueueHookPrint(ctl *ctl, id string) { + ctl.xwrite("queuehookprint") + ctl.xwrite(id) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +func cmdQueueHookRetiredList(c *cmd) { + c.params = "[filtersortflags]" + c.help = `List matching webhooks in the retired queue. + +Prints list of retired webhooks, their IDs and basic information. +` + var f queue.HookRetiredFilter + var s queue.HookRetiredSort + flagHookRetiredFilterSort(c.flag, &f, &s) + if len(c.Parse()) != 0 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueHookRetiredList(xctl(), f, s) +} + +func ctlcmdQueueHookRetiredList(ctl *ctl, f queue.HookRetiredFilter, s queue.HookRetiredSort) { + ctl.xwrite("queuehookretiredlist") + xctlwriteJSON(ctl, f) + xctlwriteJSON(ctl, s) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} + +func cmdQueueHookRetiredPrint(c *cmd) { + c.params = "id" + c.help = `Print details of a webhook from the retired queue. + +The retired webhook is printed to stdout as JSON.
+` + args := c.Parse() + if len(args) != 1 { + c.Usage() + } + mustLoadConfig() + ctlcmdQueueHookRetiredPrint(xctl(), args[0]) +} + +func ctlcmdQueueHookRetiredPrint(ctl *ctl, id string) { + ctl.xwrite("queuehookretiredprint") + ctl.xwrite(id) + ctl.xreadok() + if _, err := io.Copy(os.Stdout, ctl.reader()); err != nil { + log.Fatalf("%s", err) + } +} diff --git a/queue/direct.go b/queue/direct.go index 5b4af46..3e2788c 100644 --- a/queue/direct.go +++ b/queue/direct.go @@ -30,6 +30,7 @@ import ( "github.com/mjl-/mox/smtpclient" "github.com/mjl-/mox/store" "github.com/mjl-/mox/tlsrpt" + "github.com/mjl-/mox/webhook" ) // Increased each time an outgoing connection is made for direct delivery. Used by @@ -155,7 +156,7 @@ func deliverDirect(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale if permanent { err = smtpclient.Error{Permanent: true, Err: err} } - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, err) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, err) return } @@ -175,7 +176,7 @@ func deliverDirect(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale } else { qlog.Infox("mtasts lookup temporary error, aborting delivery attempt", err, slog.Any("domain", origNextHop)) recipientDomainResult.Summary.TotalFailureSessionCount++ - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, err) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, err) return } } @@ -298,19 +299,39 @@ func deliverDirect(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale continue } - delIDs := make([]int64, len(result.delivered)) + delMsgs := make([]Msg, len(result.delivered)) for i, mr := range result.delivered { mqlog := nqlog.With(slog.Int64("msgid", mr.msg.ID), slog.Any("recipient", mr.msg.Recipient())) mqlog.Info("delivered from queue") - delIDs[i] = mr.msg.ID + mr.msg.markResult(0, "", "", true) + delMsgs[i] = *mr.msg } - if len(delIDs) > 0 { - if err := queueDelete(context.Background(), delIDs...); err != nil { - 
nqlog.Errorx("deleting messages from queue after delivery", err) + if len(delMsgs) > 0 { + err := DB.Write(context.Background(), func(tx *bstore.Tx) error { + return retireMsgs(nqlog, tx, webhook.EventDelivered, 0, "", nil, delMsgs...) + }) + if err != nil { + nqlog.Errorx("deleting messages from queue database after delivery", err) + } else if err := removeMsgsFS(nqlog, delMsgs...); err != nil { + nqlog.Errorx("removing queued messages from file system after delivery", err) } + kick() } - for _, mr := range result.failed { - fail(ctx, nqlog, []*Msg{mr.msg}, m0.DialedIPs, backoff, remoteMTA, smtpclient.Error(mr.resp)) + if len(result.failed) > 0 { + err := DB.Write(context.Background(), func(tx *bstore.Tx) error { + for _, mr := range result.failed { + failMsgsTx(nqlog, tx, []*Msg{mr.msg}, m0.DialedIPs, backoff, remoteMTA, smtpclient.Error(mr.resp)) + } + return nil + }) + if err != nil { + for _, mr := range result.failed { + nqlog.Errorx("error processing delivery failure for messages", err, + slog.Int64("msgid", mr.msg.ID), + slog.Any("recipient", mr.msg.Recipient())) + } + } + kick() } return } @@ -335,11 +356,11 @@ func deliverDirect(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale Secode: smtp.SePol7MissingReqTLS30, Err: fmt.Errorf("destination servers do not support requiretls"), } - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, remoteMTA, err) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, remoteMTA, err) return } - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, remoteMTA, lastErr) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, remoteMTA, lastErr) return } diff --git a/queue/dsn.go b/queue/dsn.go index eef9c95..d79068a 100644 --- a/queue/dsn.go +++ b/queue/dsn.go @@ -8,6 +8,7 @@ import ( "log/slog" "net" "os" + "slices" "strings" "time" @@ -24,6 +25,7 @@ import ( "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/smtpclient" "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webhook" ) var ( @@ -35,8 +37,32 @@ var ( ) ) -// todo: rename function, 
perhaps put some of the params in a delivery struct so we don't pass all the params all the time? -func fail(ctx context.Context, qlog mlog.Log, msgs []*Msg, dialedIPs map[string][]net.IP, backoff time.Duration, remoteMTA dsn.NameIP, err error) { +// failMsgsDB calls failMsgsTx with a new transaction, logging transaction errors. +func failMsgsDB(qlog mlog.Log, msgs []*Msg, dialedIPs map[string][]net.IP, backoff time.Duration, remoteMTA dsn.NameIP, err error) { + xerr := DB.Write(context.Background(), func(tx *bstore.Tx) error { + failMsgsTx(qlog, tx, msgs, dialedIPs, backoff, remoteMTA, err) + return nil + }) + if xerr != nil { + for _, m := range msgs { + qlog.Errorx("error marking delivery as failed", xerr, + slog.String("delivererr", err.Error()), + slog.Int64("msgid", m.ID), + slog.Any("recipient", m.Recipient()), + slog.Duration("backoff", backoff), + slog.Time("nextattempt", m.NextAttempt)) + } + } + kick() +} + +// todo: perhaps put some of the params in a delivery struct so we don't pass all the params all the time? + +// failMsgsTx processes a failure to deliver msgs. If the error is permanent, a DSN +// is delivered to the sender account. +// Caller must call kick() after committing the transaction for any (re)scheduling +// of messages and webhooks. +func failMsgsTx(qlog mlog.Log, tx *bstore.Tx, msgs []*Msg, dialedIPs map[string][]net.IP, backoff time.Duration, remoteMTA dsn.NameIP, err error) { + // todo future: when we implement relaying, we should be able to send DSNs to non-local users. and possibly specify a null mailfrom. ../rfc/5321:1503 + // todo future: when we implement relaying, and a dsn cannot be delivered, and requiretls was active, we cannot drop the message. instead deliver to local postmaster? though ../rfc/8689:383 may intend to say the dsn should be delivered without requiretls? + // todo future: when we implement smtp dsn extension, parameter RET=FULL must be disregarded for messages with REQUIRETLS.
../rfc/8689:379 @@ -49,6 +75,7 @@ func fail(ctx context.Context, qlog mlog.Log, msgs []*Msg, dialedIPs map[string] var errmsg = err.Error() var code int var secodeOpt string + var event webhook.OutgoingEvent if errors.As(err, &cerr) { if cerr.Line != "" { smtpLines = append([]string{cerr.Line}, cerr.MoreLines...) @@ -69,22 +96,56 @@ func fail(ctx context.Context, qlog mlog.Log, msgs []*Msg, dialedIPs map[string] } if permanent || m0.MaxAttempts == 0 && m0.Attempts >= 8 || m0.MaxAttempts > 0 && m0.Attempts >= m0.MaxAttempts { - for _, m := range msgs { - qmlog := qlog.With(slog.Int64("msgid", m.ID), slog.Any("recipient", m.Recipient())) - qmlog.Errorx("permanent failure delivering from queue", err) - deliverDSNFailure(ctx, qmlog, *m, remoteMTA, secodeOpt, errmsg, smtpLines) + event = webhook.EventFailed + if errors.Is(err, errSuppressed) { + event = webhook.EventSuppressed } - if err := queueDelete(context.Background(), ids...); err != nil { - qlog.Errorx("deleting messages from queue after permanent failure", err) - } - return - } - // All messages should have the same DialedIPs, so we can update them all at once. - qup := bstore.QueryDB[Msg](context.Background(), DB) - qup.FilterIDs(ids) - if _, xerr := qup.UpdateNonzero(Msg{LastError: errmsg, DialedIPs: dialedIPs}); err != nil { - qlog.Errorx("storing delivery error", xerr, slog.String("deliveryerror", errmsg)) + rmsgs := make([]Msg, len(msgs)) + var scl []suppressionCheck + for i, m := range msgs { + rm := *m + rm.DialedIPs = dialedIPs + rm.markResult(code, secodeOpt, errmsg, false) + + qmlog := qlog.With(slog.Int64("msgid", rm.ID), slog.Any("recipient", m.Recipient())) + qmlog.Errorx("permanent failure delivering from queue", err) + deliverDSNFailure(qmlog, rm, remoteMTA, secodeOpt, errmsg, smtpLines) + + rmsgs[i] = rm + + // If this was an smtp error from remote, we'll pass the failure to the + // suppression list. 
+ if code == 0 { + continue + } + sc := suppressionCheck{ + MsgID: rm.ID, + Account: rm.SenderAccount, + Recipient: rm.Recipient(), + Code: code, + Secode: secodeOpt, + Source: "queue", + } + scl = append(scl, sc) + } + var suppressedMsgIDs []int64 + if len(scl) > 0 { + var err error + suppressedMsgIDs, err = suppressionProcess(qlog, tx, scl...) + if err != nil { + qlog.Errorx("processing delivery failure in suppression list", err) + return + } + } + err := retireMsgs(qlog, tx, event, code, secodeOpt, suppressedMsgIDs, rmsgs...) + if err != nil { + qlog.Errorx("deleting queue messages from database after permanent failure", err) + } else if err := removeMsgsFS(qlog, rmsgs...); err != nil { + qlog.Errorx("remove queue messages from file system after permanent failure", err) + } + + return } if m0.Attempts == 5 { @@ -95,7 +156,7 @@ func fail(ctx context.Context, qlog mlog.Log, msgs []*Msg, dialedIPs map[string] for _, m := range msgs { qmlog := qlog.With(slog.Int64("msgid", m.ID), slog.Any("recipient", m.Recipient())) qmlog.Errorx("temporary failure delivering from queue, sending delayed dsn", err, slog.Duration("backoff", backoff)) - deliverDSNDelay(ctx, qmlog, *m, remoteMTA, secodeOpt, errmsg, smtpLines, retryUntil) + deliverDSNDelay(qmlog, *m, remoteMTA, secodeOpt, errmsg, smtpLines, retryUntil) } } else { for _, m := range msgs { @@ -106,9 +167,53 @@ func fail(ctx context.Context, qlog mlog.Log, msgs []*Msg, dialedIPs map[string] slog.Time("nextattempt", m0.NextAttempt)) } } + + process := func() error { + // Update DialedIPs in message, and record the result. + qup := bstore.QueryTx[Msg](tx) + qup.FilterIDs(ids) + umsgs, err := qup.List() + if err != nil { + return fmt.Errorf("retrieving messages for marking temporary delivery error: %v", err) + } + for _, um := range umsgs { + // All messages should have the same DialedIPs. 
+ um.DialedIPs = dialedIPs + um.markResult(code, secodeOpt, errmsg, false) + if err := tx.Update(&um); err != nil { + return fmt.Errorf("updating message after temporary failure to deliver: %v", err) + } + } + + // If configured, we'll queue webhooks for delivery. + accConf, ok := mox.Conf.Account(m0.SenderAccount) + if !(ok && accConf.OutgoingWebhook != nil && (len(accConf.OutgoingWebhook.Events) == 0 || slices.Contains(accConf.OutgoingWebhook.Events, string(webhook.EventDelayed)))) { + return nil + } + + hooks := make([]Hook, len(msgs)) + for i, m := range msgs { + var err error + hooks[i], err = hookCompose(*m, accConf.OutgoingWebhook.URL, accConf.OutgoingWebhook.Authorization, webhook.EventDelayed, false, code, secodeOpt) + if err != nil { + return fmt.Errorf("composing webhook for failed delivery attempt for msg id %d: %v", m.ID, err) + } + } + now := time.Now() + for i := range hooks { + if err := hookInsert(tx, &hooks[i], now, accConf.KeepRetiredWebhookPeriod); err != nil { + return fmt.Errorf("inserting webhook into queue: %v", err) + } + qlog.Debug("queueing webhook for temporary delivery errors", hooks[i].attrs()...) 
+ } + return nil + } + if err := process(); err != nil { + qlog.Errorx("processing temporary delivery error", err, slog.String("deliveryerror", errmsg)) + } } -func deliverDSNFailure(ctx context.Context, log mlog.Log, m Msg, remoteMTA dsn.NameIP, secodeOpt, errmsg string, smtpLines []string) { +func deliverDSNFailure(log mlog.Log, m Msg, remoteMTA dsn.NameIP, secodeOpt, errmsg string, smtpLines []string) { const subject = "mail delivery failed" message := fmt.Sprintf(` Delivery has failed permanently for your email to: @@ -125,10 +230,10 @@ Error during the last delivery attempt: message += "\nFull SMTP response:\n\n\t" + strings.Join(smtpLines, "\n\t") + "\n" } - deliverDSN(ctx, log, m, remoteMTA, secodeOpt, errmsg, smtpLines, true, nil, subject, message) + deliverDSN(log, m, remoteMTA, secodeOpt, errmsg, smtpLines, true, nil, subject, message) } -func deliverDSNDelay(ctx context.Context, log mlog.Log, m Msg, remoteMTA dsn.NameIP, secodeOpt, errmsg string, smtpLines []string, retryUntil time.Time) { +func deliverDSNDelay(log mlog.Log, m Msg, remoteMTA dsn.NameIP, secodeOpt, errmsg string, smtpLines []string, retryUntil time.Time) { // Should not happen, but doesn't hurt to prevent sending delayed delivery // notifications for DMARC reports. We don't want to waste postmaster attention. if m.IsDMARCReport { @@ -152,14 +257,14 @@ Error during the last delivery attempt: message += "\nFull SMTP response:\n\n\t" + strings.Join(smtpLines, "\n\t") + "\n" } - deliverDSN(ctx, log, m, remoteMTA, secodeOpt, errmsg, smtpLines, false, &retryUntil, subject, message) + deliverDSN(log, m, remoteMTA, secodeOpt, errmsg, smtpLines, false, &retryUntil, subject, message) } // We only queue DSNs for delivery failures for emails submitted by authenticated // users. So we are delivering to local users. 
../rfc/5321:1466 // ../rfc/5321:1494 // ../rfc/7208:490 -func deliverDSN(ctx context.Context, log mlog.Log, m Msg, remoteMTA dsn.NameIP, secodeOpt, errmsg string, smtpLines []string, permanent bool, retryUntil *time.Time, subject, textBody string) { +func deliverDSN(log mlog.Log, m Msg, remoteMTA dsn.NameIP, secodeOpt, errmsg string, smtpLines []string, permanent bool, retryUntil *time.Time, subject, textBody string) { kind := "delayed delivery" if permanent { kind = "failure" @@ -203,7 +308,7 @@ func deliverDSN(ctx context.Context, log mlog.Log, m Msg, remoteMTA dsn.NameIP, // ../rfc/3461:1329 var smtpDiag string if len(smtpLines) > 0 { - smtpDiag = "smtp; " + strings.Join(smtpLines, " ") + smtpDiag = strings.Join(smtpLines, " ") } dsnMsg := &dsn.Message{ @@ -221,14 +326,14 @@ func deliverDSN(ctx context.Context, log mlog.Log, m Msg, remoteMTA dsn.NameIP, Recipients: []dsn.Recipient{ { - FinalRecipient: m.Recipient(), - Action: action, - Status: status, - StatusComment: errmsg, - RemoteMTA: remoteMTA, - DiagnosticCode: smtpDiag, - LastAttemptDate: *m.LastAttempt, - WillRetryUntil: retryUntil, + FinalRecipient: m.Recipient(), + Action: action, + Status: status, + StatusComment: errmsg, + RemoteMTA: remoteMTA, + DiagnosticCodeSMTP: smtpDiag, + LastAttemptDate: *m.LastAttempt, + WillRetryUntil: retryUntil, }, }, diff --git a/queue/hook.go b/queue/hook.go new file mode 100644 index 0000000..3753bf0 --- /dev/null +++ b/queue/hook.go @@ -0,0 +1,1240 @@ +package queue + +import ( + "context" + "encoding/json" + "fmt" + "io" + "log/slog" + "net/http" + "net/textproto" + "runtime/debug" + "slices" + "strconv" + "strings" + "time" + + "github.com/prometheus/client_golang/prometheus" + "github.com/prometheus/client_golang/prometheus/promauto" + + "github.com/mjl-/bstore" + + "github.com/mjl-/mox/dns" + "github.com/mjl-/mox/dsn" + "github.com/mjl-/mox/message" + "github.com/mjl-/mox/metrics" + "github.com/mjl-/mox/mlog" + "github.com/mjl-/mox/mox-" + 
"github.com/mjl-/mox/moxvar" + "github.com/mjl-/mox/smtp" + "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webhook" + "github.com/mjl-/mox/webops" +) + +var ( + metricHookRequest = promauto.NewHistogram( + prometheus.HistogramOpts{ + Name: "mox_webhook_request_duration_seconds", + Help: "HTTP webhook call duration.", + Buckets: []float64{0.01, 0.05, 0.1, 0.5, 1, 5, 10, 20, 30}, + }, + ) + metricHookResult = promauto.NewCounterVec( + prometheus.CounterOpts{ + Name: "mox_webhook_results_total", + Help: "HTTP webhook call results.", + }, + []string{"code"}, // Known http status codes (e.g. "404"), or "xx" for unknown http status codes, or "error". + ) +) + +// Hook is a webhook call about a delivery. We'll try delivering with backoff until we succeed or fail. +type Hook struct { + ID int64 + QueueMsgID int64 `bstore:"index"` // Original queue Msg/MsgRetired ID. Zero for hooks for incoming messages. + FromID string // As generated by us and returned in webapi call. Can be empty, for incoming messages to our base address. + MessageID string // Of outgoing or incoming messages. Includes <>. + Subject string // Subject of original outgoing message, or of incoming message. + Extra map[string]string // From submitted message. + + Account string `bstore:"nonzero"` + URL string `bstore:"nonzero"` // Taken from config when webhook is scheduled. + Authorization string // Optional value for authorization header to include in HTTP request. + IsIncoming bool + OutgoingEvent string // Empty string if not outgoing. + Payload string // JSON data to be submitted. + + Submitted time.Time `bstore:"default now,index"` + Attempts int + NextAttempt time.Time `bstore:"nonzero,index"` // Index for fast scheduling. + Results []HookResult +} + +// HookResult is the result of a single attempt to deliver a webhook. +type HookResult struct { + Start time.Time + Duration time.Duration + URL string + Success bool + Code int // eg 200, 404, 500. 2xx implies success. 
+ Error string + Response string // Max 512 bytes of HTTP response body. +} + +// for logging queueing or starting delivery of a hook. +func (h Hook) attrs() []slog.Attr { + event := string(h.OutgoingEvent) + if h.IsIncoming { + event = "incoming" + } + return []slog.Attr{ + slog.Int64("webhookid", h.ID), + slog.Int("attempts", h.Attempts), + slog.Int64("msgid", h.QueueMsgID), + slog.String("account", h.Account), + slog.String("url", h.URL), + slog.String("fromid", h.FromID), + slog.String("messageid", h.MessageID), + slog.String("event", event), + slog.Time("nextattempt", h.NextAttempt), + } +} + +// LastResult returns the last result entry, or an empty result. +func (h Hook) LastResult() HookResult { + if len(h.Results) == 0 { + return HookResult{} + } + return h.Results[len(h.Results)-1] +} + +// Retired returns a HookRetired for a Hook, for insertion into the database. +func (h Hook) Retired(success bool, lastActivity, keepUntil time.Time) HookRetired { + return HookRetired{ + ID: h.ID, + QueueMsgID: h.QueueMsgID, + FromID: h.FromID, + MessageID: h.MessageID, + Subject: h.Subject, + Extra: h.Extra, + Account: h.Account, + URL: h.URL, + Authorization: h.Authorization != "", + IsIncoming: h.IsIncoming, + OutgoingEvent: h.OutgoingEvent, + Payload: h.Payload, + Submitted: h.Submitted, + Attempts: h.Attempts, + Results: h.Results, + Success: success, + LastActivity: lastActivity, + KeepUntil: keepUntil, + } +} + +// HookRetired is a Hook that was delivered/failed/canceled and kept according +// to the configuration. +type HookRetired struct { + ID int64 // Same as original Hook.ID. + QueueMsgID int64 // Original queue Msg or MsgRetired ID. Zero for hooks for incoming messages. + FromID string // As generated by us and returned in webapi call. Can be empty, for incoming messages to our base address. + MessageID string // Of outgoing or incoming messages. Includes <>. + Subject string // Subject of original outgoing message, or of incoming message. 
+ Extra map[string]string // From submitted message. + + Account string `bstore:"nonzero,index Account+LastActivity"` + URL string `bstore:"nonzero"` // Taken from config at start of each attempt. + Authorization bool // Whether request had authorization without keeping it around. + IsIncoming bool + OutgoingEvent string + Payload string // JSON data submitted. + + Submitted time.Time + SupersededByID int64 // If not 0, a Hook.ID that superseded this one and Done will be true. + Attempts int + Results []HookResult + + Success bool + LastActivity time.Time `bstore:"index"` + KeepUntil time.Time `bstore:"index"` +} + +// LastResult returns the last result entry, or an empty result. +func (h HookRetired) LastResult() HookResult { + if len(h.Results) == 0 { + return HookResult{} + } + return h.Results[len(h.Results)-1] +} + +func cleanupHookRetired(done chan struct{}) { + log := mlog.New("queue", nil) + + defer func() { + x := recover() + if x != nil { + log.Error("unhandled panic while cleaning up retired webhooks", slog.Any("x", x)) + debug.PrintStack() + metrics.PanicInc(metrics.Queue) + } + }() + + timer := time.NewTimer(4 * time.Second) + for { + select { + case <-mox.Shutdown.Done(): + done <- struct{}{} + return + case <-timer.C: + } + + cleanupHookRetiredSingle(log) + timer.Reset(time.Hour) + } +} + +func cleanupHookRetiredSingle(log mlog.Log) { + n, err := bstore.QueryDB[HookRetired](mox.Shutdown, DB).FilterLess("KeepUntil", time.Now()).Delete() + log.Check(err, "removing old retired webhooks") + if n > 0 { + log.Debug("cleaned up retired webhooks", slog.Int("count", n)) + } +} + +func hookRetiredKeep(account string) time.Duration { + keep := 24 * 7 * time.Hour + if account != "" { + accConf, ok := mox.Conf.Account(account) + if ok { + keep = accConf.KeepRetiredWebhookPeriod + } + } + return keep +} + +// HookFilter filters messages to list or operate on. Used by admin web interface +// and cli. +// +// Only non-empty/non-zero values are applied to the filter. 
Leaving all fields +// empty/zero matches all hooks. +type HookFilter struct { + Max int + IDs []int64 + Account string + Submitted string // Whether submitted before/after a time relative to now. ">$duration" or "<$duration", also with "now" for duration. + NextAttempt string // ">$duration" or "<$duration", also with "now" for duration. + Event string // Including "incoming". +} + +func (f HookFilter) apply(q *bstore.Query[Hook]) error { + if len(f.IDs) > 0 { + q.FilterIDs(f.IDs) + } + applyTime := func(field string, s string) error { + orig := s + var less bool + if strings.HasPrefix(s, "<") { + less = true + } else if !strings.HasPrefix(s, ">") { + return fmt.Errorf(`must start with "<" for less or ">" for greater than a duration ago`) + } + s = strings.TrimSpace(s[1:]) + var t time.Time + if s == "now" { + t = time.Now() + } else if d, err := time.ParseDuration(s); err != nil { + return fmt.Errorf("parsing duration %q: %v", orig, err) + } else { + t = time.Now().Add(d) + } + if less { + q.FilterLess(field, t) + } else { + q.FilterGreater(field, t) + } + return nil + } + if f.Submitted != "" { + if err := applyTime("Submitted", f.Submitted); err != nil { + return fmt.Errorf("applying filter for submitted: %v", err) + } + } + if f.NextAttempt != "" { + if err := applyTime("NextAttempt", f.NextAttempt); err != nil { + return fmt.Errorf("applying filter for next attempt: %v", err) + } + } + if f.Account != "" { + q.FilterNonzero(Hook{Account: f.Account}) + } + if f.Event != "" { + if f.Event == "incoming" { + q.FilterNonzero(Hook{IsIncoming: true}) + } else { + q.FilterNonzero(Hook{OutgoingEvent: f.Event}) + } + } + if f.Max != 0 { + q.Limit(f.Max) + } + return nil +} + +type HookSort struct { + Field string // "Queued" or "NextAttempt"/"". + LastID int64 // If > 0, we return objects beyond this, less/greater depending on Asc. + Last any // Value of Field for last object. Must be set iff LastID is set. + Asc bool // Ascending, or descending. 
+} + +func (s HookSort) apply(q *bstore.Query[Hook]) error { + switch s.Field { + case "", "NextAttempt": + s.Field = "NextAttempt" + case "Queued", "Submitted": + s.Field = "Submitted" + default: + return fmt.Errorf("unknown sort order field %q", s.Field) + } + + if s.LastID > 0 { + ls, ok := s.Last.(string) + if !ok { + return fmt.Errorf("last should be string with time, not %T %q", s.Last, s.Last) + } + last, err := time.Parse(time.RFC3339Nano, ls) + if err != nil { + last, err = time.Parse(time.RFC3339, ls) + } + if err != nil { + return fmt.Errorf("parsing last %q as time: %v", s.Last, err) + } + q.FilterNotEqual("ID", s.LastID) + var fieldEqual func(h Hook) bool + if s.Field == "NextAttempt" { + fieldEqual = func(h Hook) bool { return h.NextAttempt == last } + } else { + fieldEqual = func(h Hook) bool { return h.Submitted == last } + } + if s.Asc { + q.FilterGreaterEqual(s.Field, last) + q.FilterFn(func(h Hook) bool { + return !fieldEqual(h) || h.ID > s.LastID + }) + } else { + q.FilterLessEqual(s.Field, last) + q.FilterFn(func(h Hook) bool { + return !fieldEqual(h) || h.ID < s.LastID + }) + } + } + if s.Asc { + q.SortAsc(s.Field, "ID") + } else { + q.SortDesc(s.Field, "ID") + } + return nil +} + +// HookQueueSize returns the number of webhooks in the queue. +func HookQueueSize(ctx context.Context) (int, error) { + return bstore.QueryDB[Hook](ctx, DB).Count() +} + +// HookList returns webhooks according to filter and sort. +func HookList(ctx context.Context, filter HookFilter, sort HookSort) ([]Hook, error) { + q := bstore.QueryDB[Hook](ctx, DB) + if err := filter.apply(q); err != nil { + return nil, err + } + if err := sort.apply(q); err != nil { + return nil, err + } + return q.List() +} + +// HookRetiredFilter filters retired webhooks to list or operate on. Used by admin web interface +// and cli. +// +// Only non-empty/non-zero values are applied to the filter. Leaving all fields +// empty/zero matches all hooks.
+type HookRetiredFilter struct {
+	Max          int
+	IDs          []int64
+	Account      string
+	Submitted    string // Whether submitted before/after a time relative to now. ">$duration" or "<$duration", also with "now" for duration.
+	LastActivity string // ">$duration" or "<$duration", also with "now" for duration.
+	Event        string // Including "incoming".
+}
+
+func (f HookRetiredFilter) apply(q *bstore.Query[HookRetired]) error {
+	if len(f.IDs) > 0 {
+		q.FilterIDs(f.IDs)
+	}
+	applyTime := func(field string, s string) error {
+		orig := s
+		var less bool
+		if strings.HasPrefix(s, "<") {
+			less = true
+		} else if !strings.HasPrefix(s, ">") {
+			return fmt.Errorf(`must start with "<" for before or ">" for after a duration`)
+		}
+		s = strings.TrimSpace(s[1:])
+		var t time.Time
+		if s == "now" {
+			t = time.Now()
+		} else if d, err := time.ParseDuration(s); err != nil {
+			return fmt.Errorf("parsing duration %q: %v", orig, err)
+		} else {
+			t = time.Now().Add(d)
+		}
+		if less {
+			q.FilterLess(field, t)
+		} else {
+			q.FilterGreater(field, t)
+		}
+		return nil
+	}
+	if f.Submitted != "" {
+		if err := applyTime("Submitted", f.Submitted); err != nil {
+			return fmt.Errorf("applying filter for submitted: %v", err)
+		}
+	}
+	if f.LastActivity != "" {
+		if err := applyTime("LastActivity", f.LastActivity); err != nil {
+			return fmt.Errorf("applying filter for last activity: %v", err)
+		}
+	}
+	if f.Account != "" {
+		q.FilterNonzero(HookRetired{Account: f.Account})
+	}
+	if f.Event != "" {
+		if f.Event == "incoming" {
+			q.FilterNonzero(HookRetired{IsIncoming: true})
+		} else {
+			q.FilterNonzero(HookRetired{OutgoingEvent: f.Event})
+		}
+	}
+	if f.Max != 0 {
+		q.Limit(f.Max)
+	}
+	return nil
+}
+
+type HookRetiredSort struct {
+	Field  string // "Submitted" or "LastActivity"/"".
+	LastID int64  // If > 0, we return objects beyond this, less/greater depending on Asc.
+	Last   any    // Value of Field for last object. Must be set iff LastID is set.
+	Asc    bool   // Ascending, or descending.
+} + +func (s HookRetiredSort) apply(q *bstore.Query[HookRetired]) error { + switch s.Field { + case "", "LastActivity": + s.Field = "LastActivity" + case "Submitted": + s.Field = "Submitted" + default: + return fmt.Errorf("unknown sort order field %q", s.Field) + } + + if s.LastID > 0 { + ls, ok := s.Last.(string) + if !ok { + return fmt.Errorf("last should be string with time, not %T %q", s.Last, s.Last) + } + last, err := time.Parse(time.RFC3339Nano, ls) + if err != nil { + last, err = time.Parse(time.RFC3339, ls) + } + if err != nil { + return fmt.Errorf("parsing last %q as time: %v", s.Last, err) + } + q.FilterNotEqual("ID", s.LastID) + var fieldEqual func(hr HookRetired) bool + if s.Field == "LastActivity" { + fieldEqual = func(hr HookRetired) bool { return hr.LastActivity == last } + } else { + fieldEqual = func(hr HookRetired) bool { return hr.Submitted == last } + } + if s.Asc { + q.FilterGreaterEqual(s.Field, last) + q.FilterFn(func(hr HookRetired) bool { + return !fieldEqual(hr) || hr.ID > s.LastID + }) + } else { + q.FilterLessEqual(s.Field, last) + q.FilterFn(func(hr HookRetired) bool { + return !fieldEqual(hr) || hr.ID < s.LastID + }) + } + } + if s.Asc { + q.SortAsc(s.Field, "ID") + } else { + q.SortDesc(s.Field, "ID") + } + return nil +} + +// HookRetiredList returns retired webhooks according to filter and sort. +func HookRetiredList(ctx context.Context, filter HookRetiredFilter, sort HookRetiredSort) ([]HookRetired, error) { + q := bstore.QueryDB[HookRetired](ctx, DB) + if err := filter.apply(q); err != nil { + return nil, err + } + if err := sort.apply(q); err != nil { + return nil, err + } + return q.List() +} + +// HookNextAttemptAdd adds a duration to the NextAttempt for all matching messages, and +// kicks the queue. 
+func HookNextAttemptAdd(ctx context.Context, filter HookFilter, d time.Duration) (affected int, err error) { + err = DB.Write(ctx, func(tx *bstore.Tx) error { + q := bstore.QueryTx[Hook](tx) + if err := filter.apply(q); err != nil { + return err + } + hooks, err := q.List() + if err != nil { + return fmt.Errorf("listing matching hooks: %v", err) + } + for _, h := range hooks { + h.NextAttempt = h.NextAttempt.Add(d) + if err := tx.Update(&h); err != nil { + return err + } + } + affected = len(hooks) + return nil + }) + if err != nil { + return 0, err + } + hookqueueKick() + return affected, nil +} + +// HookNextAttemptSet sets NextAttempt for all matching messages to a new absolute +// time and kicks the queue. +func HookNextAttemptSet(ctx context.Context, filter HookFilter, t time.Time) (affected int, err error) { + q := bstore.QueryDB[Hook](ctx, DB) + if err := filter.apply(q); err != nil { + return 0, err + } + n, err := q.UpdateNonzero(Hook{NextAttempt: t}) + if err != nil { + return 0, fmt.Errorf("selecting and updating hooks in queue: %v", err) + } + hookqueueKick() + return n, nil +} + +// HookCancel prevents more delivery attempts of the hook, moving it to the +// retired list if configured. 
+func HookCancel(ctx context.Context, log mlog.Log, filter HookFilter) (affected int, err error) { + var hooks []Hook + err = DB.Write(ctx, func(tx *bstore.Tx) error { + q := bstore.QueryTx[Hook](tx) + if err := filter.apply(q); err != nil { + return err + } + q.Gather(&hooks) + n, err := q.Delete() + if err != nil { + return fmt.Errorf("selecting and deleting hooks from queue: %v", err) + } + + if len(hooks) == 0 { + return nil + } + + now := time.Now() + for _, h := range hooks { + keep := hookRetiredKeep(h.Account) + if keep > 0 { + hr := h.Retired(false, now, now.Add(keep)) + hr.Results = append(hr.Results, HookResult{Start: now, Error: "canceled by admin"}) + if err := tx.Insert(&hr); err != nil { + return fmt.Errorf("inserting retired hook: %v", err) + } + } + } + + affected = n + return nil + }) + if err != nil { + return 0, err + } + for _, h := range hooks { + log.Info("canceled hook", h.attrs()...) + } + hookqueueKick() + return affected, nil +} + +func hookCompose(m Msg, url, authz string, event webhook.OutgoingEvent, suppressing bool, code int, secodeOpt string) (Hook, error) { + now := time.Now() + + var lastError string + if len(m.Results) > 0 { + lastError = m.Results[len(m.Results)-1].Error + } + var ecode string + if secodeOpt != "" { + ecode = fmt.Sprintf("%d.%s", code/100, secodeOpt) + } + data := webhook.Outgoing{ + Event: event, + Suppressing: suppressing, + QueueMsgID: m.ID, + FromID: m.FromID, + MessageID: m.MessageID, + Subject: m.Subject, + WebhookQueued: now, + Error: lastError, + SMTPCode: code, + SMTPEnhancedCode: ecode, + Extra: m.Extra, + } + if data.Extra == nil { + data.Extra = map[string]string{} + } + payload, err := json.Marshal(data) + if err != nil { + return Hook{}, fmt.Errorf("marshal webhook payload: %v", err) + } + + h := Hook{ + QueueMsgID: m.ID, + FromID: m.FromID, + MessageID: m.MessageID, + Subject: m.Subject, + Extra: m.Extra, + Account: m.SenderAccount, + URL: url, + Authorization: authz, + IsIncoming: false, + 
OutgoingEvent: string(event), + Payload: string(payload), + Submitted: now, + NextAttempt: now, + } + return h, nil +} + +// Incoming processes a message delivered over SMTP for webhooks. If the message is +// a DSN, a webhook for outgoing deliveries may be scheduled (if configured). +// Otherwise, a webhook for incoming deliveries may be scheduled. +func Incoming(ctx context.Context, log mlog.Log, acc *store.Account, messageID string, m store.Message, part message.Part, mailboxName string) error { + now := time.Now() + var data any + + log = log.With( + slog.Int64("msgid", m.ID), + slog.String("messageid", messageID), + slog.String("mailbox", mailboxName), + ) + + // todo future: if there is no fromid in our rcpt address, but this is a 3-part dsn with headers that includes message-id, try matching based on that. + // todo future: once we implement the SMTP DSN extension, use ENVID when sending (if destination implements it), and start looking for Original-Envelope-ID in the DSN. + + // If this is a DSN for a message we sent, don't deliver a hook for incoming + // message, but an outgoing status webhook. 
+ var fromID string + dom, err := dns.ParseDomain(m.RcptToDomain) + if err != nil { + log.Debugx("parsing recipient domain in incoming message", err) + } else { + domconf, _ := mox.Conf.Domain(dom) + if domconf.LocalpartCatchallSeparator != "" { + t := strings.SplitN(string(m.RcptToLocalpart), domconf.LocalpartCatchallSeparator, 2) + if len(t) == 2 { + fromID = t[1] + } + } + } + var outgoingEvent webhook.OutgoingEvent + var queueMsgID int64 + var subject string + if fromID != "" { + err := DB.Write(ctx, func(tx *bstore.Tx) (rerr error) { + mr, err := bstore.QueryTx[MsgRetired](tx).FilterNonzero(MsgRetired{FromID: fromID}).Get() + if err == bstore.ErrAbsent { + log.Debug("no original message found for fromid", slog.String("fromid", fromID)) + return nil + } else if err != nil { + return fmt.Errorf("looking up original message for fromid: %v", err) + } + + queueMsgID = mr.ID + subject = mr.Subject + + log = log.With(slog.String("fromid", fromID)) + log.Debug("processing incoming message about previous delivery for webhooks") + + // We'll record this message in the results. + mr.LastActivity = now + mr.Results = append(mr.Results, MsgResult{Start: now, Error: "incoming message"}) + result := &mr.Results[len(mr.Results)-1] // Updated below. 
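The FromID lookup above hinges on splitting the recipient localpart on the account's configured catchall separator. A minimal standalone sketch of that split (the helper name `extractFromID` is hypothetical, assuming the common "+" separator):

```go
package main

import (
	"fmt"
	"strings"
)

// extractFromID mirrors the logic above: the recipient localpart
// ("mjl+abc123") is split on the account's catchall separator; anything after
// the separator is treated as the FromID of a previously sent message.
func extractFromID(localpart, separator string) string {
	t := strings.SplitN(localpart, separator, 2)
	if len(t) == 2 {
		return t[1]
	}
	return "" // No separator present, so not a FromID address.
}

func main() {
	fmt.Println(extractFromID("mjl+abc123", "+")) // abc123
	fmt.Println(extractFromID("mjl", "+"))        // empty string
}
```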
+ + outgoingEvent = webhook.EventUnrecognized + var suppressedMsgIDs []int64 + var isDSN bool + var code int + var secode string + defer func() { + if rerr == nil { + var ecode string + if secode != "" { + ecode = fmt.Sprintf("%d.%s", code/100, secode) + } + data = webhook.Outgoing{ + Event: outgoingEvent, + DSN: isDSN, + Suppressing: len(suppressedMsgIDs) > 0, + QueueMsgID: mr.ID, + FromID: fromID, + MessageID: mr.MessageID, + Subject: mr.Subject, + WebhookQueued: now, + SMTPCode: code, + SMTPEnhancedCode: ecode, + Extra: mr.Extra, + } + + if err := tx.Update(&mr); err != nil { + rerr = fmt.Errorf("updating retired message after processing: %v", err) + return + } + } + }() + + if !(part.MediaType == "MULTIPART" && part.MediaSubType == "REPORT" && len(part.Parts) >= 2 && part.Parts[1].MediaType == "MESSAGE" && (part.Parts[1].MediaSubType == "DELIVERY-STATUS" || part.Parts[1].MediaSubType == "GLOBAL-DELIVERY-STATUS")) { + // Some kind of delivery-related event, but we don't recognize it. 
+ result.Error = "incoming message not a dsn" + return nil + } + isDSN = true + dsnutf8 := part.Parts[1].MediaSubType == "GLOBAL-DELIVERY-STATUS" + dsnmsg, err := dsn.Decode(part.Parts[1].ReaderUTF8OrBinary(), dsnutf8) + if err != nil { + log.Infox("parsing dsn message for webhook", err) + result.Error = fmt.Sprintf("parsing incoming dsn: %v", err) + return nil + } else if len(dsnmsg.Recipients) != 1 { + log.Info("dsn message for webhook does not have exactly one dsn recipient", slog.Int("nrecipients", len(dsnmsg.Recipients))) + result.Error = fmt.Sprintf("incoming dsn has %d recipients, expecting 1", len(dsnmsg.Recipients)) + return nil + } + + dsnrcpt := dsnmsg.Recipients[0] + + if dsnrcpt.DiagnosticCodeSMTP != "" { + code, secode = parseSMTPCodes(dsnrcpt.DiagnosticCodeSMTP) + } + if code == 0 && dsnrcpt.Status != "" { + if strings.HasPrefix(dsnrcpt.Status, "4.") { + code = 400 + secode = dsnrcpt.Status[2:] + } else if strings.HasPrefix(dsnrcpt.Status, "5.") { + code = 500 + secode = dsnrcpt.Status[2:] + } + } + result.Code = code + result.Secode = secode + log.Debug("incoming dsn message", slog.String("action", string(dsnrcpt.Action)), slog.Int("dsncode", code), slog.String("dsnsecode", secode)) + + switch s := dsnrcpt.Action; s { + case dsn.Failed: + outgoingEvent = webhook.EventFailed + + if code != 0 { + sc := suppressionCheck{ + MsgID: mr.ID, + Account: acc.Name, + Recipient: mr.Recipient(), + Code: code, + Secode: secode, + Source: "DSN", + } + suppressedMsgIDs, err = suppressionProcess(log, tx, sc) + if err != nil { + return fmt.Errorf("processing dsn for suppression list: %v", err) + } + } else { + log.Debug("no code/secode in dsn for failed delivery", slog.Int64("msgid", mr.ID)) + } + + case dsn.Delayed, dsn.Delivered, dsn.Relayed, dsn.Expanded: + outgoingEvent = webhook.OutgoingEvent(string(s)) + result.Success = s != dsn.Delayed + + default: + log.Info("unrecognized dsn action", slog.String("action", string(dsnrcpt.Action))) + } + return nil + }) + if 
err != nil { + return fmt.Errorf("processing message based on fromid: %v", err) + } + } + + accConf, _ := acc.Conf() + + var hookURL, authz string + var isIncoming bool + if data == nil { + if accConf.IncomingWebhook == nil { + return nil + } + hookURL = accConf.IncomingWebhook.URL + authz = accConf.IncomingWebhook.Authorization + + log.Debug("composing webhook for incoming message") + + isIncoming = true + var rcptTo string + if m.RcptToDomain != "" { + rcptTo = m.RcptToLocalpart.String() + "@" + m.RcptToDomain + } + in := webhook.Incoming{ + Structure: webhook.PartStructure(&part), + Meta: webhook.IncomingMeta{ + MsgID: m.ID, + MailFrom: m.MailFrom, + MailFromValidated: m.MailFromValidated, + MsgFromValidated: m.MsgFromValidated, + RcptTo: rcptTo, + DKIMVerifiedDomains: m.DKIMDomains, + RemoteIP: m.RemoteIP, + Received: m.Received, + MailboxName: mailboxName, + }, + } + if in.Meta.DKIMVerifiedDomains == nil { + in.Meta.DKIMVerifiedDomains = []string{} + } + if env := part.Envelope; env != nil { + subject = env.Subject + in.From = addresses(env.From) + in.To = addresses(env.To) + in.CC = addresses(env.CC) + in.BCC = addresses(env.BCC) + in.ReplyTo = addresses(env.ReplyTo) + in.Subject = env.Subject + in.MessageID = env.MessageID + in.InReplyTo = env.InReplyTo + if !env.Date.IsZero() { + in.Date = &env.Date + } + } + // todo: ideally, we would have this information available in parsed Part, not require parsing headers here. + h, err := part.Header() + if err != nil { + log.Debugx("parsing headers of incoming message", err, slog.Int64("msgid", m.ID)) + } else { + refs, err := message.ReferencedIDs(h.Values("References"), nil) + if err != nil { + log.Debugx("parsing references header", err, slog.Int64("msgid", m.ID)) + } + for i, r := range refs { + refs[i] = "<" + r + ">" + } + if refs == nil { + refs = []string{} + } + in.References = refs + + // Check if message is automated. Empty SMTP MAIL FROM indicates this was some kind + // of service message. 
Several headers indicate out-of-office replies, messages + // from mailing or marketing lists. And the content-type can indicate a report + // (e.g. DSN/MDN). + in.Meta.Automated = m.MailFrom == "" || isAutomated(h) || part.MediaType == "MULTIPART" && part.MediaSubType == "REPORT" + } + + text, html, _, err := webops.ReadableParts(part, 1*1024*1024) + if err != nil { + log.Debugx("looking for text and html content in message", err) + } + in.Text = strings.ReplaceAll(text, "\r\n", "\n") + in.HTML = strings.ReplaceAll(html, "\r\n", "\n") + + data = in + } else if accConf.OutgoingWebhook == nil { + return nil + } else if len(accConf.OutgoingWebhook.Events) == 0 || slices.Contains(accConf.OutgoingWebhook.Events, string(outgoingEvent)) { + hookURL = accConf.OutgoingWebhook.URL + authz = accConf.OutgoingWebhook.Authorization + } else { + log.Debug("not sending webhook, account not subscribed for event", slog.String("event", string(outgoingEvent))) + return nil + } + + payload, err := json.Marshal(data) + if err != nil { + return fmt.Errorf("marshal webhook payload: %v", err) + } + + h := Hook{ + QueueMsgID: queueMsgID, + FromID: fromID, + MessageID: messageID, + Subject: subject, + Account: acc.Name, + URL: hookURL, + Authorization: authz, + IsIncoming: isIncoming, + OutgoingEvent: string(outgoingEvent), + Payload: string(payload), + Submitted: now, + NextAttempt: now, + } + err = DB.Write(ctx, func(tx *bstore.Tx) error { + if err := hookInsert(tx, &h, now, accConf.KeepRetiredWebhookPeriod); err != nil { + return fmt.Errorf("queueing webhook for incoming message: %v", err) + } + return nil + }) + if err != nil { + return fmt.Errorf("inserting webhook in database: %v", err) + } + log.Debug("queued webhook for incoming message", h.attrs()...) 
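The outgoing-event payloads composed above always carry a non-nil Extra map and render the enhanced status code as class-dot-secode. A simplified standalone mirror of that shape (the `outgoing` struct here is a trimmed local stand-in for webhook.Outgoing, not the real type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// outgoing is a trimmed, hypothetical stand-in for the webhook.Outgoing
// payload composed above; only a few of its fields are shown.
type outgoing struct {
	Event            string
	QueueMsgID       int64
	FromID           string
	SMTPCode         int
	SMTPEnhancedCode string
	Extra            map[string]string
}

// enhancedCode renders the enhanced status code the same way the queue code
// does: the class digit of the basic SMTP code, plus the secode remainder.
func enhancedCode(code int, secode string) string {
	if secode == "" {
		return ""
	}
	return fmt.Sprintf("%d.%s", code/100, secode)
}

func main() {
	data := outgoing{
		Event:            "failed",
		QueueMsgID:       123,
		FromID:           "abc123",
		SMTPCode:         554,
		SMTPEnhancedCode: enhancedCode(554, "7.1"),
		Extra:            map[string]string{}, // Never nil, matching the code above.
	}
	buf, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(buf))
}
```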
+	hookqueueKick()
+	return nil
+}
+
+func isAutomated(h textproto.MIMEHeader) bool {
+	l := []string{"List-Id", "List-Unsubscribe", "List-Unsubscribe-Post", "Precedence"}
+	for _, k := range l {
+		if h.Get(k) != "" {
+			return true
+		}
+	}
+	if s := strings.TrimSpace(h.Get("Auto-Submitted")); s != "" && !strings.EqualFold(s, "no") {
+		return true
+	}
+	return false
+}
+
+func parseSMTPCodes(line string) (code int, secode string) {
+	t := strings.SplitN(line, " ", 3)
+	if len(t) <= 1 || len(t[0]) != 3 {
+		return 0, ""
+	}
+	v, err := strconv.ParseUint(t[0], 10, 64)
+	if err != nil || v >= 600 {
+		return 0, ""
+	}
+	if len(t) >= 2 && (strings.HasPrefix(t[1], "4.") || strings.HasPrefix(t[1], "5.")) {
+		secode = t[1][2:]
+	}
+	return int(v), secode
+}
+
+// Insert hook into database, but first retire any existing pending hook for
+// QueueMsgID if it is > 0.
+func hookInsert(tx *bstore.Tx, h *Hook, now time.Time, accountKeepPeriod time.Duration) error {
+	if err := tx.Insert(h); err != nil {
+		return fmt.Errorf("insert webhook: %v", err)
+	}
+	if h.QueueMsgID == 0 {
+		return nil
+	}
+
+	// Find an existing queued hook for the same queue msgid. There can be at most one.
+	oh, err := bstore.QueryTx[Hook](tx).FilterNonzero(Hook{QueueMsgID: h.QueueMsgID}).FilterNotEqual("ID", h.ID).Get()
+	if err == bstore.ErrAbsent {
+		return nil
+	} else if err != nil {
+		return fmt.Errorf("get existing webhook before inserting new hook for same queuemsgid %d: %v", h.QueueMsgID, err)
+	}
+
+	// Retire this queued hook.
+	// This hook may be in the process of being delivered. When delivered, we'll try to
+	// move it from Hook to HookRetired. But that will fail since Hook is already
+	// retired. We detect that situation and update the retired hook with the new
+	// (final) result.
+ if accountKeepPeriod > 0 { + hr := oh.Retired(false, now, now.Add(accountKeepPeriod)) + hr.SupersededByID = h.ID + if err := tx.Insert(&hr); err != nil { + return fmt.Errorf("inserting superseded webhook as retired hook: %v", err) + } + } + if err := tx.Delete(&oh); err != nil { + return fmt.Errorf("deleting superseded webhook: %v", err) + } + return nil +} + +func addresses(al []message.Address) []webhook.NameAddress { + l := make([]webhook.NameAddress, len(al)) + for i, a := range al { + addr := a.User + "@" + a.Host + pa, err := smtp.ParseAddress(addr) + if err == nil { + addr = pa.Pack(true) + } + l[i] = webhook.NameAddress{ + Name: a.Name, + Address: addr, + } + } + return l +} + +var ( + hookqueue = make(chan struct{}, 1) + hookDeliveryResults = make(chan string, 1) +) + +func hookqueueKick() { + select { + case hookqueue <- struct{}{}: + default: + } +} + +func startHookQueue(done chan struct{}) { + log := mlog.New("queue", nil) + busyHookURLs := map[string]struct{}{} + timer := time.NewTimer(0) + for { + select { + case <-mox.Shutdown.Done(): + done <- struct{}{} + return + case <-hookqueue: + case <-timer.C: + case url := <-hookDeliveryResults: + delete(busyHookURLs, url) + } + + if len(busyHookURLs) >= maxConcurrentHookDeliveries { + continue + } + + hookLaunchWork(log, busyHookURLs) + timer.Reset(hookNextWork(mox.Shutdown, log, busyHookURLs)) + } +} + +func hookNextWork(ctx context.Context, log mlog.Log, busyURLs map[string]struct{}) time.Duration { + q := bstore.QueryDB[Hook](ctx, DB) + if len(busyURLs) > 0 { + var urls []any + for u := range busyURLs { + urls = append(urls, u) + } + q.FilterNotEqual("URL", urls...) 
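The diagnostic-line parsing in parseSMTPCodes above can be exercised standalone. This sketch reimplements it (with the bounds check applied to the parsed value, so codes of 600 and above are rejected); note that the returned secode deliberately drops the "4."/"5." class prefix, since the class is reconstructed later from the basic code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCodes mirrors parseSMTPCodes above: a DSN diagnostic line like
// "554 5.0.0 error" yields the basic SMTP code and the enhanced code without
// its leading class. Malformed or out-of-range lines yield zero values.
func parseCodes(line string) (code int, secode string) {
	t := strings.SplitN(line, " ", 3)
	if len(t) <= 1 || len(t[0]) != 3 {
		return 0, ""
	}
	v, err := strconv.ParseUint(t[0], 10, 64)
	if err != nil || v >= 600 {
		return 0, ""
	}
	if strings.HasPrefix(t[1], "4.") || strings.HasPrefix(t[1], "5.") {
		secode = t[1][2:]
	}
	return int(v), secode
}

func main() {
	code, secode := parseCodes("554 5.0.0 error")
	fmt.Println(code, secode) // 554 0.0
}
```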
+ } + q.SortAsc("NextAttempt") + q.Limit(1) + h, err := q.Get() + if err == bstore.ErrAbsent { + return 24 * time.Hour + } else if err != nil { + log.Errorx("finding time for next webhook delivery attempt", err) + return 1 * time.Minute + } + return time.Until(h.NextAttempt) +} + +func hookLaunchWork(log mlog.Log, busyURLs map[string]struct{}) int { + q := bstore.QueryDB[Hook](mox.Shutdown, DB) + q.FilterLessEqual("NextAttempt", time.Now()) + q.SortAsc("NextAttempt") + q.Limit(maxConcurrentHookDeliveries) + if len(busyURLs) > 0 { + var urls []any + for u := range busyURLs { + urls = append(urls, u) + } + q.FilterNotEqual("URL", urls...) + } + var hooks []Hook + seen := map[string]bool{} + err := q.ForEach(func(h Hook) error { + u := h.URL + if _, ok := busyURLs[u]; !ok && !seen[u] { + seen[u] = true + hooks = append(hooks, h) + } + return nil + }) + if err != nil { + log.Errorx("querying for work in webhook queue", err) + mox.Sleep(mox.Shutdown, 1*time.Second) + return -1 + } + + for _, h := range hooks { + busyURLs[h.URL] = struct{}{} + go hookDeliver(log, h) + } + return len(hooks) +} + +var hookIntervals []time.Duration + +func init() { + const M = time.Minute + const H = time.Hour + hookIntervals = []time.Duration{M, 2 * M, 4 * M, 15 * M / 2, 15 * M, 30 * M, 1 * H, 2 * H, 4 * H, 8 * H, 16 * H} +} + +func hookDeliver(log mlog.Log, h Hook) { + ctx := mox.Shutdown + + qlog := log.WithCid(mox.Cid()) + qlog.Debug("attempting to deliver webhook", h.attrs()...) + qlog = qlog.With(slog.Int64("webhookid", h.ID)) + + defer func() { + hookDeliveryResults <- h.URL + + x := recover() + if x != nil { + qlog.Error("webhook deliver panic", slog.Any("panic", x)) + debug.PrintStack() + metrics.PanicInc(metrics.Queue) + } + }() + + // todo: should we get a new webhook url from the config before attempting? would intervene with our "urls busy" approach. may not be worth it. + + // Set Attempts & NextAttempt early. 
In case of failures while processing, at least + // we won't try again immediately. We do backoff at intervals: + var backoff time.Duration + if h.Attempts < len(hookIntervals) { + backoff = hookIntervals[h.Attempts] + } else { + backoff = hookIntervals[len(hookIntervals)-1] * time.Duration(2) + } + backoff += time.Duration(jitter.Intn(200)-100) * backoff / 10000 + h.Attempts++ + now := time.Now() + h.NextAttempt = now.Add(backoff) + h.Results = append(h.Results, HookResult{Start: now, URL: h.URL, Error: resultErrorDelivering}) + result := &h.Results[len(h.Results)-1] + if err := DB.Update(mox.Shutdown, &h); err != nil { + qlog.Errorx("storing webhook delivery attempt", err) + return + } + + hctx, cancel := context.WithTimeout(ctx, 60*time.Second) + defer cancel() + t0 := time.Now() + code, response, err := HookPost(hctx, qlog, h.ID, h.Attempts, h.URL, h.Authorization, h.Payload) + result.Duration = time.Since(t0) + result.Success = err == nil + result.Code = code + result.Error = "" + result.Response = response + if err != nil { + result.Error = err.Error() + } + if err != nil && h.Attempts <= len(hookIntervals) { + // We'll try again later, so only update existing record. + qlog.Debugx("webhook delivery failed, will try again later", err) + xerr := DB.Write(context.Background(), func(tx *bstore.Tx) error { + if err := tx.Update(&h); err == bstore.ErrAbsent { + return updateRetiredHook(tx, h, result) + } else if err != nil { + return fmt.Errorf("update webhook after retryable failure: %v", err) + } + return nil + }) + qlog.Check(xerr, "updating failed webhook delivery attempt in database", slog.String("deliveryerr", err.Error())) + return + } + + qlog.Debugx("webhook delivery completed", err, slog.Bool("success", result.Success)) + + // Move Hook to HookRetired. 
+	err = DB.Write(context.Background(), func(tx *bstore.Tx) error {
+		if err := tx.Delete(&h); err == bstore.ErrAbsent {
+			return updateRetiredHook(tx, h, result)
+		} else if err != nil {
+			return fmt.Errorf("removing webhook from database: %v", err)
+		}
+		keep := hookRetiredKeep(h.Account)
+		if keep > 0 {
+			hr := h.Retired(result.Success, t0, t0.Add(keep))
+			if err := tx.Insert(&hr); err != nil {
+				return fmt.Errorf("inserting retired webhook in database: %v", err)
+			}
+		}
+		return nil
+	})
+	qlog.Check(err, "moving delivered webhook from queue to retired hooks")
+}
+
+func updateRetiredHook(tx *bstore.Tx, h Hook, result *HookResult) error {
+	// Hook is gone. It may have been superseded and moved to HookRetired while we were
+	// delivering it. If so, add the result to the retired hook.
+	hr := HookRetired{ID: h.ID}
+	if err := tx.Get(&hr); err != nil {
+		return fmt.Errorf("result for webhook that was no longer in webhook queue or retired webhooks: %v", err)
+	}
+	result.Error += "(superseded)"
+	hr.Results = append(hr.Results, *result)
+	if err := tx.Update(&hr); err != nil {
+		return fmt.Errorf("updating retired webhook after webhook was superseded during delivery: %v", err)
+	}
+	return nil
+}
+
+var hookClient = &http.Client{Transport: hookTransport()}
+
+func hookTransport() *http.Transport {
+	t := http.DefaultTransport.(*http.Transport).Clone()
+	// Limit resources consumed during idle periods, probably most of the time. But
+	// during busy periods, we may use the few connections for many events.
+ t.IdleConnTimeout = 5 * time.Second + t.MaxIdleConnsPerHost = 2 + return t +} + +func HookPost(ctx context.Context, log mlog.Log, hookID int64, attempt int, url, authz string, payload string) (code int, response string, err error) { + req, err := http.NewRequestWithContext(ctx, "POST", url, strings.NewReader(payload)) + if err != nil { + return 0, "", fmt.Errorf("new request: %v", err) + } + req.Header.Set("User-Agent", fmt.Sprintf("mox/%s (webhook)", moxvar.Version)) + req.Header.Set("Content-Type", "application/json; charset=utf-8") + req.Header.Set("X-Mox-Webhook-ID", fmt.Sprintf("%d", hookID)) + req.Header.Set("X-Mox-Webhook-Attempt", fmt.Sprintf("%d", attempt)) + if authz != "" { + req.Header.Set("Authorization", authz) + } + t0 := time.Now() + resp, err := hookClient.Do(req) + metricHookRequest.Observe(float64(time.Since(t0)) / float64(time.Second)) + if err != nil { + metricHookResult.WithLabelValues("error").Inc() + log.Debugx("webhook http transaction", err) + return 0, "", fmt.Errorf("http transact: %v", err) + } + defer resp.Body.Close() + + // Use full http status code for known codes, and a generic "xx" for others. 
+ result := fmt.Sprintf("%d", resp.StatusCode) + if http.StatusText(resp.StatusCode) == "" { + result = fmt.Sprintf("%dxx", resp.StatusCode/100) + } + metricHookResult.WithLabelValues(result).Inc() + log.Debug("webhook http post result", slog.Int("statuscode", resp.StatusCode), slog.Duration("duration", time.Since(t0))) + + respbuf, _ := io.ReadAll(io.LimitReader(resp.Body, 512)) + if resp.StatusCode != http.StatusOK { + err = fmt.Errorf("http status %q, expected 200 ok", resp.Status) + } + return resp.StatusCode, string(respbuf), err +} diff --git a/queue/hook_test.go b/queue/hook_test.go new file mode 100644 index 0000000..0efa0af --- /dev/null +++ b/queue/hook_test.go @@ -0,0 +1,688 @@ +package queue + +import ( + "bytes" + "encoding/json" + "fmt" + "net/http" + "net/http/httptest" + "slices" + "strings" + "testing" + "time" + + "github.com/mjl-/bstore" + + "github.com/mjl-/mox/dsn" + "github.com/mjl-/mox/message" + "github.com/mjl-/mox/smtp" + "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webhook" +) + +// Test webhooks for incoming message that is not related to outgoing deliveries. 
+func TestHookIncoming(t *testing.T) { + acc, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + accret, err := store.OpenAccount(pkglog, "retired") + tcheck(t, err, "open account for retired") + defer func() { + accret.Close() + accret.CheckClosed() + }() + + testIncoming := func(a *store.Account, expIn bool) { + t.Helper() + + _, err := bstore.QueryDB[Hook](ctxbg, DB).Delete() + tcheck(t, err, "clean up hooks") + + mr := bytes.NewReader([]byte(testmsg)) + now := time.Now().Round(0) + m := store.Message{ + ID: 123, + RemoteIP: "::1", + MailFrom: "sender@remote.example", + MailFromLocalpart: "sender", + MailFromDomain: "remote.example", + RcptToLocalpart: "rcpt", + RcptToDomain: "mox.example", + MsgFromLocalpart: "mjl", + MsgFromDomain: "mox.example", + MsgFromOrgDomain: "mox.example", + EHLOValidated: true, + MailFromValidated: true, + MsgFromValidated: true, + EHLOValidation: store.ValidationPass, + MailFromValidation: store.ValidationPass, + MsgFromValidation: store.ValidationDMARC, + DKIMDomains: []string{"remote.example"}, + Received: now, + Size: int64(len(testmsg)), + } + part, err := message.EnsurePart(pkglog.Logger, true, mr, int64(len(testmsg))) + tcheck(t, err, "parsing message") + + err = Incoming(ctxbg, pkglog, a, "", m, part, "Inbox") + tcheck(t, err, "pass incoming message") + + hl, err := bstore.QueryDB[Hook](ctxbg, DB).List() + tcheck(t, err, "list hooks") + if !expIn { + tcompare(t, len(hl), 0) + return + } + tcompare(t, len(hl), 1) + h := hl[0] + tcompare(t, h.IsIncoming, true) + var in webhook.Incoming + dec := json.NewDecoder(strings.NewReader(h.Payload)) + err = dec.Decode(&in) + tcheck(t, err, "decode incoming webhook") + + expIncoming := webhook.Incoming{ + From: []webhook.NameAddress{{Address: "mjl@mox.example"}}, + To: []webhook.NameAddress{{Address: "mjl@mox.example"}}, + CC: []webhook.NameAddress{}, + BCC: []webhook.NameAddress{}, + ReplyTo: []webhook.NameAddress{}, + References: []string{}, + 
Subject: "test", + Text: "test email\n", + + Structure: webhook.PartStructure(&part), + Meta: webhook.IncomingMeta{ + MsgID: m.ID, + MailFrom: m.MailFrom, + MailFromValidated: m.MailFromValidated, + MsgFromValidated: m.MsgFromValidated, + RcptTo: "rcpt@mox.example", + DKIMVerifiedDomains: []string{"remote.example"}, + RemoteIP: "::1", + Received: m.Received, + MailboxName: "Inbox", + Automated: false, + }, + } + tcompare(t, in, expIncoming) + } + + testIncoming(acc, false) + testIncoming(accret, true) +} + +// Test with fromid and various DSNs, and delivery. +func TestFromIDIncomingDelivery(t *testing.T) { + acc, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + accret, err := store.OpenAccount(pkglog, "retired") + tcheck(t, err, "open account for retired") + defer func() { + accret.Close() + accret.CheckClosed() + }() + + // Account that only gets webhook calls, but no retired webhooks. + acchook, err := store.OpenAccount(pkglog, "hook") + tcheck(t, err, "open account for hook") + defer func() { + acchook.Close() + acchook.CheckClosed() + }() + + addr, err := smtp.ParseAddress("mjl@mox.example") + tcheck(t, err, "parse address") + path := addr.Path() + + now := time.Now().Round(0) + m := store.Message{ + ID: 123, + RemoteIP: "::1", + MailFrom: "sender@remote.example", + MailFromLocalpart: "sender", + MailFromDomain: "remote.example", + RcptToLocalpart: "rcpt", + RcptToDomain: "mox.example", + MsgFromLocalpart: "mjl", + MsgFromDomain: "mox.example", + MsgFromOrgDomain: "mox.example", + EHLOValidated: true, + MailFromValidated: true, + MsgFromValidated: true, + EHLOValidation: store.ValidationPass, + MailFromValidation: store.ValidationPass, + MsgFromValidation: store.ValidationDMARC, + DKIMDomains: []string{"remote.example"}, + Received: now, + DSN: true, + } + + testIncoming := func(a *store.Account, rawmsg []byte, retiredFromID string, expIn bool, expOut *webhook.Outgoing) { + t.Helper() + + _, err := 
bstore.QueryDB[Hook](ctxbg, DB).Delete() + tcheck(t, err, "clean up hooks") + _, err = bstore.QueryDB[MsgRetired](ctxbg, DB).Delete() + tcheck(t, err, "clean up retired messages") + + qmr := MsgRetired{ + SenderAccount: a.Name, + SenderLocalpart: "sender", + SenderDomainStr: "remote.example", + RecipientLocalpart: "rcpt", + RecipientDomain: path.IPDomain, + RecipientDomainStr: "mox.example", + RecipientAddress: "rcpt@mox.example", + Success: true, + KeepUntil: now.Add(time.Minute), + } + m.RcptToLocalpart = "mjl" + qmr.FromID = retiredFromID + m.Size = int64(len(rawmsg)) + m.RcptToLocalpart += smtp.Localpart("+unique") + + err = DB.Insert(ctxbg, &qmr) + tcheck(t, err, "insert retired message to match") + + if expOut != nil { + expOut.QueueMsgID = qmr.ID + } + + mr := bytes.NewReader(rawmsg) + part, err := message.EnsurePart(pkglog.Logger, true, mr, int64(len(rawmsg))) + tcheck(t, err, "parsing message") + + err = Incoming(ctxbg, pkglog, a, "", m, part, "Inbox") + tcheck(t, err, "pass incoming message") + + hl, err := bstore.QueryDB[Hook](ctxbg, DB).List() + tcheck(t, err, "list hooks") + if !expIn && expOut == nil { + tcompare(t, len(hl), 0) + return + } + tcompare(t, len(hl), 1) + h := hl[0] + tcompare(t, h.IsIncoming, expIn) + if expIn { + return + } + var out webhook.Outgoing + dec := json.NewDecoder(strings.NewReader(h.Payload)) + err = dec.Decode(&out) + tcheck(t, err, "decode outgoing webhook") + + out.WebhookQueued = time.Time{} + tcompare(t, &out, expOut) + } + + dsncompose := func(m *dsn.Message) []byte { + buf, err := m.Compose(pkglog, false) + tcheck(t, err, "compose dsn") + return buf + } + makedsn := func(action dsn.Action) *dsn.Message { + return &dsn.Message{ + From: path, + To: path, + TextBody: "explanation", + MessageID: "", + ReportingMTA: "localhost", + Recipients: []dsn.Recipient{ + { + FinalRecipient: path, + Action: action, + Status: "5.0.0.", + DiagnosticCodeSMTP: "554 5.0.0 error", + }, + }, + } + } + + msgfailed := 
dsncompose(makedsn(dsn.Failed)) + + // No FromID to match against, so we get a webhook for a new incoming message. + testIncoming(acc, msgfailed, "", false, nil) + testIncoming(accret, msgfailed, "mismatch", true, nil) + + // DSN with multiple recipients are treated as unrecognized dsns. + multidsn := makedsn(dsn.Delivered) + multidsn.Recipients = append(multidsn.Recipients, multidsn.Recipients[0]) + msgmultidsn := dsncompose(multidsn) + testIncoming(acc, msgmultidsn, "unique", false, nil) + testIncoming(accret, msgmultidsn, "unique", false, &webhook.Outgoing{ + Event: webhook.EventUnrecognized, + DSN: true, + FromID: "unique", + }) + + msgdelayed := dsncompose(makedsn(dsn.Delayed)) + testIncoming(acc, msgdelayed, "unique", false, nil) + testIncoming(accret, msgdelayed, "unique", false, &webhook.Outgoing{ + Event: webhook.EventDelayed, + DSN: true, + FromID: "unique", + SMTPCode: 554, + SMTPEnhancedCode: "5.0.0", + }) + + msgrelayed := dsncompose(makedsn(dsn.Relayed)) + testIncoming(acc, msgrelayed, "unique", false, nil) + testIncoming(accret, msgrelayed, "unique", false, &webhook.Outgoing{ + Event: webhook.EventRelayed, + DSN: true, + FromID: "unique", + SMTPCode: 554, + SMTPEnhancedCode: "5.0.0", + }) + + msgunrecognized := dsncompose(makedsn(dsn.Action("bogus"))) + testIncoming(acc, msgunrecognized, "unique", false, nil) + testIncoming(accret, msgunrecognized, "unique", false, &webhook.Outgoing{ + Event: webhook.EventUnrecognized, + DSN: true, + FromID: "unique", + }) + + // Not a DSN but to fromid address also causes "unrecognized". 
+ msgunrecognized2 := []byte(testmsg) + testIncoming(acc, msgunrecognized2, "unique", false, nil) + testIncoming(accret, msgunrecognized2, "unique", false, &webhook.Outgoing{ + Event: webhook.EventUnrecognized, + DSN: false, + FromID: "unique", + }) + + msgdelivered := dsncompose(makedsn(dsn.Delivered)) + testIncoming(acc, msgdelivered, "unique", false, nil) + testIncoming(accret, msgdelivered, "unique", false, &webhook.Outgoing{ + Event: webhook.EventDelivered, + DSN: true, + FromID: "unique", + // This is what DSN claims. + SMTPCode: 554, + SMTPEnhancedCode: "5.0.0", + }) + + testIncoming(acc, msgfailed, "unique", false, nil) + testIncoming(accret, msgfailed, "unique", false, &webhook.Outgoing{ + Event: webhook.EventFailed, + DSN: true, + FromID: "unique", + SMTPCode: 554, + SMTPEnhancedCode: "5.0.0", + }) + + // We still have a webhook in the queue from the test above. + // Try to get the hook delivered. We'll try various error handling cases and superseding. + + qsize, err := HookQueueSize(ctxbg) + tcheck(t, err, "hook queue size") + tcompare(t, qsize, 1) + + var handler http.HandlerFunc + handleError := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusInternalServerError) + fmt.Fprintln(w, "server error") + }) + handleOK := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if r.Header.Get("Authorization") != "Basic dXNlcm5hbWU6cGFzc3dvcmQ=" { + http.Error(w, "unauthorized", http.StatusUnauthorized) + return + } + if r.Header.Get("X-Mox-Webhook-ID") == "" { + http.Error(w, "missing header x-mox-webhook-id", http.StatusBadRequest) + return + } + if r.Header.Get("X-Mox-Webhook-Attempt") == "" { + http.Error(w, "missing header x-mox-webhook-attempt", http.StatusBadRequest) + return + } + fmt.Fprintln(w, "ok") + }) + hs := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + handler.ServeHTTP(w, r) + })) + defer hs.Close() + + h, err := bstore.QueryDB[Hook](ctxbg, DB).Get() + 
tcheck(t, err, "get hook from queue") + + next := hookNextWork(ctxbg, pkglog, map[string]struct{}{"https://other.example/": {}}) + if next > 0 { + t.Fatalf("next scheduled work should be immediate, is %v", next) + } + + // Respond with an error and see a retry is scheduled. + h.URL = hs.URL + // Update hook URL in database, so we can call hookLaunchWork. We'll call + // hookDeliver for later attempts. + err = DB.Update(ctxbg, &h) + tcheck(t, err, "update hook url") + handler = handleError + hookLaunchWork(pkglog, map[string]struct{}{"https://other.example/": {}}) + <-hookDeliveryResults + err = DB.Get(ctxbg, &h) + tcheck(t, err, "get hook after failed delivery attempt") + tcompare(t, h.Attempts, 1) + tcompare(t, len(h.Results), 1) + tcompare(t, h.LastResult().Success, false) + tcompare(t, h.LastResult().Code, http.StatusInternalServerError) + tcompare(t, h.LastResult().Response, "server error\n") + + next = hookNextWork(ctxbg, pkglog, map[string]struct{}{}) + if next <= 0 { + t.Fatalf("next scheduled work is immediate, should be in the future") + } + + n, err := HookNextAttemptSet(ctxbg, HookFilter{}, time.Now().Add(time.Minute)) + tcheck(t, err, "schedule hook into the future") + tcompare(t, n, 1) + n, err = HookNextAttemptAdd(ctxbg, HookFilter{}, -time.Minute) + tcheck(t, err, "schedule hook to now") + tcompare(t, n, 1) + next = hookNextWork(ctxbg, pkglog, map[string]struct{}{}) + if next > 0 { + t.Fatalf("next scheduled work should be immediate, is %v", next) + } + + handler = handleOK + hookDeliver(pkglog, h) + <-hookDeliveryResults + err = DB.Get(ctxbg, &h) + tcompare(t, err, bstore.ErrAbsent) + hr := HookRetired{ID: h.ID} + err = DB.Get(ctxbg, &hr) + tcheck(t, err, "get retired hook after delivery") + tcompare(t, hr.Attempts, 2) + tcompare(t, len(hr.Results), 2) + tcompare(t, hr.LastResult().Success, true) + tcompare(t, hr.LastResult().Code, http.StatusOK) + tcompare(t, hr.LastResult().Response, "ok\n") + + // Check that cleaning up retired webhooks works.
+ cleanupHookRetiredSingle(pkglog) + hrl, err := bstore.QueryDB[HookRetired](ctxbg, DB).List() + tcheck(t, err, "listing retired hooks") + tcompare(t, len(hrl), 0) + + // Helper to get a representative webhook added to the queue. + addHook := func(a *store.Account) { + testIncoming(a, msgfailed, "unique", false, &webhook.Outgoing{ + Event: webhook.EventFailed, + DSN: true, + FromID: "unique", + SMTPCode: 554, + SMTPEnhancedCode: "5.0.0", + }) + } + + // Keep attempting and failing delivery until we give up. + addHook(accret) + h, err = bstore.QueryDB[Hook](ctxbg, DB).Get() + tcheck(t, err, "get added hook") + h.URL = hs.URL + handler = handleError + for i := 0; i < len(hookIntervals); i++ { + hookDeliver(pkglog, h) + <-hookDeliveryResults + err := DB.Get(ctxbg, &h) + tcheck(t, err, "get hook") + tcompare(t, h.Attempts, i+1) + } + // Final attempt. + hookDeliver(pkglog, h) + <-hookDeliveryResults + err = DB.Get(ctxbg, &h) + tcompare(t, err, bstore.ErrAbsent) + hr = HookRetired{ID: h.ID} + err = DB.Get(ctxbg, &hr) + tcheck(t, err, "get retired hook after failure") + tcompare(t, hr.Attempts, len(hookIntervals)+1) + tcompare(t, len(hr.Results), len(hookIntervals)+1) + tcompare(t, hr.LastResult().Success, false) + tcompare(t, hr.LastResult().Code, http.StatusInternalServerError) + tcompare(t, hr.LastResult().Response, "server error\n") + + // Check account "hook" doesn't get retired webhooks. 
+ addHook(acchook) + h, err = bstore.QueryDB[Hook](ctxbg, DB).Get() + tcheck(t, err, "get added hook") + handler = handleOK + h.URL = hs.URL + hookDeliver(pkglog, h) + <-hookDeliveryResults + err = DB.Get(ctxbg, &h) + tcompare(t, err, bstore.ErrAbsent) + hr = HookRetired{ID: h.ID} + err = DB.Get(ctxbg, &hr) + tcompare(t, err, bstore.ErrAbsent) + + // HookCancel + addHook(accret) + h, err = bstore.QueryDB[Hook](ctxbg, DB).Get() + tcheck(t, err, "get added hook") + n, err = HookCancel(ctxbg, pkglog, HookFilter{}) + tcheck(t, err, "canceling hook") + tcompare(t, n, 1) + l, err := HookList(ctxbg, HookFilter{}, HookSort{}) + tcheck(t, err, "list hook") + tcompare(t, len(l), 0) + + // Superseding: When a webhook is scheduled for a message that already has a + // pending webhook, the previous webhook should be removed/retired. + _, err = bstore.QueryDB[HookRetired](ctxbg, DB).Delete() + tcheck(t, err, "clean up retired webhooks") + _, err = bstore.QueryDB[MsgRetired](ctxbg, DB).Delete() + tcheck(t, err, "clean up retired messages") + qmr := MsgRetired{ + SenderAccount: accret.Name, + SenderLocalpart: "sender", + SenderDomainStr: "remote.example", + RecipientLocalpart: "rcpt", + RecipientDomain: path.IPDomain, + RecipientDomainStr: "mox.example", + RecipientAddress: "rcpt@mox.example", + Success: true, + KeepUntil: now.Add(time.Minute), + FromID: "unique", + } + err = DB.Insert(ctxbg, &qmr) + tcheck(t, err, "insert retired message to match") + m.RcptToLocalpart = "mjl" + m.Size = int64(len(msgdelayed)) + m.RcptToLocalpart += smtp.Localpart("+unique") + + mr := bytes.NewReader(msgdelayed) + part, err := message.EnsurePart(pkglog.Logger, true, mr, int64(len(msgdelayed))) + tcheck(t, err, "parsing message") + + // Cause first webhook. + err = Incoming(ctxbg, pkglog, accret, "", m, part, "Inbox") + tcheck(t, err, "pass incoming message") + h, err = bstore.QueryDB[Hook](ctxbg, DB).Get() + tcheck(t, err, "get hook") + + // Cause second webhook for same message. 
First should now be retired and marked as superseded. + err = Incoming(ctxbg, pkglog, accret, "", m, part, "Inbox") + tcheck(t, err, "pass incoming message again") + h2, err := bstore.QueryDB[Hook](ctxbg, DB).Get() + tcheck(t, err, "get hook") + hr, err = bstore.QueryDB[HookRetired](ctxbg, DB).Get() + tcheck(t, err, "get retired hook") + tcompare(t, h.ID, hr.ID) + tcompare(t, hr.SupersededByID, h2.ID) + tcompare(t, h2.ID > h.ID, true) +} + +func TestHookListFilterSort(t *testing.T) { + _, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + now := time.Now().Round(0) + h := Hook{0, 0, "fromid", "messageid", "subj", nil, "mjl", "http://localhost", "", false, "delivered", "", now, 0, now, []HookResult{}} + h1 := h + h1.Submitted = now.Add(-time.Second) + h1.NextAttempt = now.Add(time.Minute) + hl := []Hook{h, h, h, h, h, h1} + err = DB.Write(ctxbg, func(tx *bstore.Tx) error { + for i := range hl { + err := hookInsert(tx, &hl[i], now, time.Minute) + tcheck(t, err, "insert hook") + } + return nil + }) + tcheck(t, err, "inserting hooks") + h1 = hl[len(hl)-1] + + hlrev := slices.Clone(hl) + slices.Reverse(hlrev) + + // Ascending by nextattempt,id. + l, err := HookList(ctxbg, HookFilter{}, HookSort{Asc: true}) + tcheck(t, err, "list") + tcompare(t, l, hl) + + // Descending by nextattempt,id. + l, err = HookList(ctxbg, HookFilter{}, HookSort{}) + tcheck(t, err, "list") + tcompare(t, l, hlrev) + + // Descending by submitted,id. + l, err = HookList(ctxbg, HookFilter{}, HookSort{Field: "Submitted"}) + tcheck(t, err, "list") + ll := append(append([]Hook{}, hlrev[1:]...), hl[5]) + tcompare(t, l, ll) + + // Filter by all fields to get a single. + allfilters := HookFilter{ + Max: 2, + IDs: []int64{h1.ID}, + Account: "mjl", + Submitted: "<1s", + NextAttempt: ">1s", + Event: "delivered", + } + l, err = HookList(ctxbg, allfilters, HookSort{}) + tcheck(t, err, "list single") + tcompare(t, l, []Hook{h1}) + + // Paginated NextAttempt asc.
+ var lastID int64 + var last any + l = nil + for { + nl, err := HookList(ctxbg, HookFilter{Max: 1}, HookSort{Asc: true, LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].NextAttempt.Format(time.RFC3339Nano) + } + tcompare(t, l, hl) + + // Paginated NextAttempt desc. + l = nil + lastID = 0 + last = "" + for { + nl, err := HookList(ctxbg, HookFilter{Max: 1}, HookSort{LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].NextAttempt.Format(time.RFC3339Nano) + } + tcompare(t, l, hlrev) + + // Paginated Submitted desc. + l = nil + lastID = 0 + last = "" + for { + nl, err := HookList(ctxbg, HookFilter{Max: 1}, HookSort{Field: "Submitted", LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].Submitted.Format(time.RFC3339Nano) + } + tcompare(t, l, ll) + + // Paginated Submitted asc. + l = nil + lastID = 0 + last = "" + for { + nl, err := HookList(ctxbg, HookFilter{Max: 1}, HookSort{Field: "Submitted", Asc: true, LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].Submitted.Format(time.RFC3339Nano) + } + llrev := slices.Clone(ll) + slices.Reverse(llrev) + tcompare(t, l, llrev) + + // Retire messages and do similar but more basic tests. The code is similar. 
+ var hrl []HookRetired + err = DB.Write(ctxbg, func(tx *bstore.Tx) error { + for _, h := range hl { + hr := h.Retired(false, h.NextAttempt, time.Now().Add(time.Minute).Round(0)) + err := tx.Insert(&hr) + tcheck(t, err, "inserting retired") + hrl = append(hrl, hr) + } + return nil + }) + tcheck(t, err, "adding retired") + + // Paginated LastActivity desc. + var lr []HookRetired + lastID = 0 + last = "" + l = nil + for { + nl, err := HookRetiredList(ctxbg, HookRetiredFilter{Max: 1}, HookRetiredSort{LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + lr = append(lr, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].LastActivity.Format(time.RFC3339Nano) + } + hrlrev := slices.Clone(hrl) + slices.Reverse(hrlrev) + tcompare(t, lr, hrlrev) + + // Filter by all fields to get a single. + allretiredfilters := HookRetiredFilter{ + Max: 2, + IDs: []int64{hrlrev[0].ID}, + Account: "mjl", + Submitted: "<1s", + LastActivity: ">1s", + Event: "delivered", + } + lr, err = HookRetiredList(ctxbg, allretiredfilters, HookRetiredSort{}) + tcheck(t, err, "list single") + tcompare(t, lr, []HookRetired{hrlrev[0]}) +} diff --git a/queue/queue.go b/queue/queue.go index 3d89017..09000bd 100644 --- a/queue/queue.go +++ b/queue/queue.go @@ -4,6 +4,7 @@ package queue import ( + "bytes" "context" "errors" "fmt" @@ -13,7 +14,7 @@ import ( "os" "path/filepath" "runtime/debug" - "sort" + "slices" "strings" "time" @@ -27,15 +28,19 @@ import ( "github.com/mjl-/mox/config" "github.com/mjl-/mox/dns" "github.com/mjl-/mox/dsn" + "github.com/mjl-/mox/message" "github.com/mjl-/mox/metrics" "github.com/mjl-/mox/mlog" "github.com/mjl-/mox/mox-" "github.com/mjl-/mox/moxio" + "github.com/mjl-/mox/publicsuffix" "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/smtpclient" "github.com/mjl-/mox/store" "github.com/mjl-/mox/tlsrpt" "github.com/mjl-/mox/tlsrptdb" + "github.com/mjl-/mox/webapi" + "github.com/mjl-/mox/webhook" ) var ( @@ -71,8 +76,8 @@ var ( 
var jitter = mox.NewPseudoRand() -var DBTypes = []any{Msg{}, HoldRule{}} // Types stored in DB. -var DB *bstore.DB // Exported for making backups. +var DBTypes = []any{Msg{}, HoldRule{}, MsgRetired{}, webapi.Suppression{}, Hook{}, HookRetired{}} // Types stored in DB. +var DB *bstore.DB // Exported for making backups. // Allow requesting delivery starting from up to this interval from time of submission. const FutureReleaseIntervalMax = 60 * 24 * time.Hour @@ -121,23 +126,25 @@ type Msg struct { SenderLocalpart smtp.Localpart // Should be a local user and domain. SenderDomain dns.IPDomain SenderDomainStr string // For filtering, unicode. + FromID string // For transactional messages, used to match later DSNs. RecipientLocalpart smtp.Localpart // Typically a remote user and domain. RecipientDomain dns.IPDomain - RecipientDomainStr string // For filtering, unicode. + RecipientDomainStr string // For filtering, unicode domain. Can also contain ip enclosed in []. Attempts int // Next attempt is based on last attempt and exponential back off based on attempts. MaxAttempts int // Max number of attempts before giving up. If 0, then the default of 8 attempts is used instead. DialedIPs map[string][]net.IP // For each host, the IPs that were dialed. Used for IP selection for later attempts. NextAttempt time.Time // For scheduling. LastAttempt *time.Time - LastError string + Results []MsgResult Has8bit bool // Whether message contains bytes with high bit set, determines whether 8BITMIME SMTP extension is needed. SMTPUTF8 bool // Whether message requires use of SMTPUTF8. IsDMARCReport bool // Delivery failures for DMARC reports are handled differently. IsTLSReport bool // Delivery failures for TLS reports are handled differently. Size int64 // Full size of message, combined MsgPrefix with contents of message file. - MessageID string // Used when composing a DSN, in its References header. - MsgPrefix []byte + MessageID string // Message-ID header, including <>. 
Used when composing a DSN, in its References header. + MsgPrefix []byte // Data to send before the contents from the file, typically with headers like DKIM-Signature. + Subject string // For context about delivery. // If set, this message is a DSN and this is a version using utf-8, for the case // the remote MTA supports smtputf8. In this case, Size and MsgPrefix are not @@ -169,6 +176,43 @@ type Msg struct { // utc date-time. FutureReleaseRequest string // ../rfc/4865:305 + + Extra map[string]string // Extra information, for transactional email. +} + +// MsgResult is the result (or work in progress) of a delivery attempt. +type MsgResult struct { + Start time.Time + Duration time.Duration + Success bool + Code int + Secode string + Error string + // todo: store smtp trace for failed deliveries for debugging, perhaps also for successful deliveries. +} + +// Stored in MsgResult.Error while delivery is in progress. Replaced after success/error. +const resultErrorDelivering = "delivering..." + +// markResult updates/adds a delivery result. +func (m *Msg) markResult(code int, secode string, errmsg string, success bool) { + if len(m.Results) == 0 || m.Results[len(m.Results)-1].Error != resultErrorDelivering { + m.Results = append(m.Results, MsgResult{Start: time.Now()}) + } + result := &m.Results[len(m.Results)-1] + result.Duration = time.Since(result.Start) + result.Code = code + result.Secode = secode + result.Error = errmsg + result.Success = success +} + +// LastResult returns the last result entry, or an empty result. +func (m *Msg) LastResult() MsgResult { + if len(m.Results) == 0 { + return MsgResult{Start: time.Now()} + } + return m.Results[len(m.Results)-1] } // Sender of message as used in MAIL FROM. @@ -186,6 +230,114 @@ func (m Msg) MessagePath() string { return mox.DataDirPath(filepath.Join("queue", store.MessagePath(m.ID))) } +// todo: store which transport (if any) was actually used in MsgResult, based on routes.
+ +// Retired returns a MsgRetired for the message, for history of deliveries. +func (m Msg) Retired(success bool, t, keepUntil time.Time) MsgRetired { + return MsgRetired{ + ID: m.ID, + BaseID: m.BaseID, + Queued: m.Queued, + SenderAccount: m.SenderAccount, + SenderLocalpart: m.SenderLocalpart, + SenderDomainStr: m.SenderDomainStr, + FromID: m.FromID, + RecipientLocalpart: m.RecipientLocalpart, + RecipientDomain: m.RecipientDomain, + RecipientDomainStr: m.RecipientDomainStr, + Attempts: m.Attempts, + MaxAttempts: m.MaxAttempts, + DialedIPs: m.DialedIPs, + LastAttempt: m.LastAttempt, + Results: m.Results, + Has8bit: m.Has8bit, + SMTPUTF8: m.SMTPUTF8, + IsDMARCReport: m.IsDMARCReport, + IsTLSReport: m.IsTLSReport, + Size: m.Size, + MessageID: m.MessageID, + Subject: m.Subject, + Transport: m.Transport, + RequireTLS: m.RequireTLS, + FutureReleaseRequest: m.FutureReleaseRequest, + Extra: m.Extra, + + RecipientAddress: smtp.Path{Localpart: m.RecipientLocalpart, IPDomain: m.RecipientDomain}.XString(true), + Success: success, + LastActivity: t, + KeepUntil: keepUntil, + } +} + +// MsgRetired is a message for which delivery completed, either successful, +// failed/canceled. Retired messages are only stored if so configured, and will be +// cleaned up after the configured period. +type MsgRetired struct { + ID int64 // Same ID as it was as Msg.ID. + + BaseID int64 + Queued time.Time + SenderAccount string // Failures are delivered back to this local account. Also used for routing. + SenderLocalpart smtp.Localpart // Should be a local user and domain. + SenderDomainStr string // For filtering, unicode. + FromID string `bstore:"index"` // Used to match DSNs. + RecipientLocalpart smtp.Localpart // Typically a remote user and domain. + RecipientDomain dns.IPDomain + RecipientDomainStr string // For filtering, unicode. + Attempts int // Next attempt is based on last attempt and exponential back off based on attempts. + MaxAttempts int // Max number of attempts before giving up. 
If 0, then the default of 8 attempts is used instead. + DialedIPs map[string][]net.IP // For each host, the IPs that were dialed. Used for IP selection for later attempts. + LastAttempt *time.Time + Results []MsgResult + + Has8bit bool // Whether message contains bytes with high bit set, determines whether 8BITMIME SMTP extension is needed. + SMTPUTF8 bool // Whether message requires use of SMTPUTF8. + IsDMARCReport bool // Delivery failures for DMARC reports are handled differently. + IsTLSReport bool // Delivery failures for TLS reports are handled differently. + Size int64 // Full size of message, combined MsgPrefix with contents of message file. + MessageID string // Used when composing a DSN, in its References header. + Subject string // For context about delivery. + + Transport string + RequireTLS *bool + FutureReleaseRequest string + + Extra map[string]string // Extra information, for transactional email. + + LastActivity time.Time `bstore:"index"` + RecipientAddress string `bstore:"index RecipientAddress+LastActivity"` + Success bool // Whether delivery to next hop succeeded. + KeepUntil time.Time `bstore:"index"` +} + +// Sender of message as used in MAIL FROM. +func (m MsgRetired) Sender() (path smtp.Path, err error) { + path.Localpart = m.SenderLocalpart + if strings.HasPrefix(m.SenderDomainStr, "[") && strings.HasSuffix(m.SenderDomainStr, "]") { + s := m.SenderDomainStr[1 : len(m.SenderDomainStr)-1] + path.IPDomain.IP = net.ParseIP(s) + if path.IPDomain.IP == nil { + err = fmt.Errorf("parsing ip address %q", s) + } + } else { + path.IPDomain.Domain, err = dns.ParseDomain(m.SenderDomainStr) + } + return +} + +// Recipient of message as used in RCPT TO. +func (m MsgRetired) Recipient() smtp.Path { + return smtp.Path{Localpart: m.RecipientLocalpart, IPDomain: m.RecipientDomain} +} + +// LastResult returns the last result entry, or an empty result.
+func (m MsgRetired) LastResult() MsgResult { + if len(m.Results) == 0 { + return MsgResult{} + } + return m.Results[len(m.Results)-1] +} + // Init opens the queue database without starting delivery. func Init() error { qpath := mox.DataDirPath(filepath.FromSlash("queue/index.db")) @@ -197,24 +349,29 @@ func Init() error { var err error DB, err = bstore.Open(mox.Shutdown, qpath, &bstore.Options{Timeout: 5 * time.Second, Perm: 0660}, DBTypes...) + if err == nil { + err = DB.Read(mox.Shutdown, func(tx *bstore.Tx) error { + return metricHoldUpdate(tx) + }) + } if err != nil { if isNew { os.Remove(qpath) } return fmt.Errorf("open queue database: %s", err) } - metricHoldUpdate() return nil } // When we update the gauge, we just get the full current value, not try to account // for adds/removes. -func metricHoldUpdate() { - count, err := bstore.QueryDB[Msg](context.Background(), DB).FilterNonzero(Msg{Hold: true}).Count() +func metricHoldUpdate(tx *bstore.Tx) error { + count, err := bstore.QueryTx[Msg](tx).FilterNonzero(Msg{Hold: true}).Count() if err != nil { - mlog.New("queue", nil).Errorx("querying number of queued messages that are on hold", err) + return fmt.Errorf("querying messages on hold for metric: %v", err) } metricHold.Set(float64(count)) + return nil } // Shutdown closes the queue database. The delivery process isn't stopped. For tests only. @@ -226,12 +383,15 @@ func Shutdown() { DB = nil } +// todo: the filtering & sorting can use improvements. too much duplicated code (variants between {Msg,Hook}{,Retired}. Sort has pagination fields, some untyped. + // Filter filters messages to list or operate on. Used by admin web interface // and cli. // // Only non-empty/non-zero values are applied to the filter. Leaving all fields // empty/zero matches all messages. 
type Filter struct { + Max int IDs []int64 Account string From string @@ -254,7 +414,7 @@ func (f Filter) apply(q *bstore.Query[Msg]) error { } else if !strings.HasPrefix(s, ">") { return fmt.Errorf(`must start with "<" for before or ">" for after a duration`) } - s = s[1:] + s = strings.TrimSpace(s[1:]) var t time.Time if s == "now" { t = time.Now() @@ -294,35 +454,82 @@ func (f Filter) apply(q *bstore.Query[Msg]) error { return f.From != "" && strings.Contains(m.Sender().XString(true), f.From) || f.To != "" && strings.Contains(m.Recipient().XString(true), f.To) }) } + if f.Max != 0 { + q.Limit(f.Max) + } return nil } -// List returns all messages in the delivery queue. -// Ordered by earliest delivery attempt first. -func List(ctx context.Context, f Filter) ([]Msg, error) { +type Sort struct { + Field string // "Queued" or "NextAttempt"/"". + LastID int64 // If > 0, we return objects beyond this, less/greater depending on Asc. + Last any // Value of Field for last object. Must be set iff LastID is set. + Asc bool // Ascending, or descending. 
+} + +func (s Sort) apply(q *bstore.Query[Msg]) error { + switch s.Field { + case "", "NextAttempt": + s.Field = "NextAttempt" + case "Queued": + s.Field = "Queued" + default: + return fmt.Errorf("unknown sort order field %q", s.Field) + } + + if s.LastID > 0 { + ls, ok := s.Last.(string) + if !ok { + return fmt.Errorf("last should be string with time, not %T %q", s.Last, s.Last) + } + last, err := time.Parse(time.RFC3339Nano, ls) + if err != nil { + last, err = time.Parse(time.RFC3339, ls) + } + if err != nil { + return fmt.Errorf("parsing last %q as time: %v", s.Last, err) + } + q.FilterNotEqual("ID", s.LastID) + var fieldEqual func(m Msg) bool + if s.Field == "NextAttempt" { + fieldEqual = func(m Msg) bool { return m.NextAttempt == last } + } else { + fieldEqual = func(m Msg) bool { return m.Queued == last } + } + if s.Asc { + q.FilterGreaterEqual(s.Field, last) + q.FilterFn(func(m Msg) bool { + return !fieldEqual(m) || m.ID > s.LastID + }) + } else { + q.FilterLessEqual(s.Field, last) + q.FilterFn(func(m Msg) bool { + return !fieldEqual(m) || m.ID < s.LastID + }) + } + } + if s.Asc { + q.SortAsc(s.Field, "ID") + } else { + q.SortDesc(s.Field, "ID") + } + return nil +} + +// List returns max 100 messages matching filter in the delivery queue. +// By default, orders by next delivery attempt. 
+func List(ctx context.Context, filter Filter, sort Sort) ([]Msg, error) { q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { + if err := filter.apply(q); err != nil { + return nil, err + } + if err := sort.apply(q); err != nil { return nil, err } qmsgs, err := q.List() if err != nil { return nil, err } - sort.Slice(qmsgs, func(i, j int) bool { - a := qmsgs[i] - b := qmsgs[j] - la := a.LastAttempt != nil - lb := b.LastAttempt != nil - if !la && lb { - return true - } else if la && !lb { - return false - } - if !la && !lb || a.LastAttempt.Equal(*b.LastAttempt) { - return a.ID < b.ID - } - return a.LastAttempt.Before(*b.LastAttempt) - }) return qmsgs, nil } @@ -339,6 +546,7 @@ func HoldRuleList(ctx context.Context) ([]HoldRule, error) { // HoldRuleAdd adds a new hold rule causing newly submitted messages to be marked // as "on hold", and existing matching messages too. func HoldRuleAdd(ctx context.Context, log mlog.Log, hr HoldRule) (HoldRule, error) { + var n int err := DB.Write(ctx, func(tx *bstore.Tx) error { hr.ID = 0 hr.SenderDomainStr = hr.SenderDomain.Name() @@ -356,18 +564,18 @@ func HoldRuleAdd(ctx context.Context, log mlog.Log, hr HoldRule) (HoldRule, erro RecipientDomainStr: hr.RecipientDomainStr, }) } - n, err := q.UpdateField("Hold", true) + var err error + n, err = q.UpdateField("Hold", true) if err != nil { return fmt.Errorf("marking existing matching messages in queue on hold: %v", err) } - log.Info("marked messages in queue as on hold", slog.Int("messages", n)) - return nil + return metricHoldUpdate(tx) }) if err != nil { return HoldRule{}, err } - queuekick() - metricHoldUpdate() + log.Info("marked messages in queue as on hold", slog.Int("messages", n)) + msgqueueKick() return hr, nil } @@ -385,7 +593,8 @@ func HoldRuleRemove(ctx context.Context, log mlog.Log, holdRuleID int64) error { } // MakeMsg is a convenience function that sets the commonly used fields for a Msg. 
-func MakeMsg(sender, recipient smtp.Path, has8bit, smtputf8 bool, size int64, messageID string, prefix []byte, requireTLS *bool, next time.Time) Msg { +// messageID should include <>. +func MakeMsg(sender, recipient smtp.Path, has8bit, smtputf8 bool, size int64, messageID string, prefix []byte, requireTLS *bool, next time.Time, subject string) Msg { return Msg{ SenderLocalpart: sender.Localpart, SenderDomain: sender.IPDomain, @@ -396,26 +605,30 @@ func MakeMsg(sender, recipient smtp.Path, has8bit, smtputf8 bool, size int64, me Size: size, MessageID: messageID, MsgPrefix: prefix, + Subject: subject, RequireTLS: requireTLS, Queued: time.Now(), NextAttempt: next, } } -// Add one or more new messages to the queue. They'll get the same BaseID, so they -// can be delivered in a single SMTP transaction, with a single DATA command, but -// may be split into multiple transactions if errors/limits are encountered. The -// queue is kicked immediately to start a first delivery attempt. +// Add one or more new messages to the queue. If the sender paths and MsgPrefix are +// identical, they'll get the same BaseID, so they can be delivered in a single +// SMTP transaction, with a single DATA command, but may be split into multiple +// transactions if errors/limits are encountered. The queue is kicked immediately +// to start a first delivery attempt. // // ID of the messages must be 0 and will be set after inserting in the queue. // // Add sets derived fields like SenderDomainStr and RecipientDomainStr, and fields -related to queueing, such as Queued, NextAttempt, LastAttempt, LastError. +// related to queueing, such as Queued, NextAttempt.
func Add(ctx context.Context, log mlog.Log, senderAccount string, msgFile *os.File, qml ...Msg) error { if len(qml) == 0 { return fmt.Errorf("must queue at least one message") } + base := true + for i, qm := range qml { if qm.ID != 0 { return fmt.Errorf("id of queued messages must be 0") @@ -423,38 +636,9 @@ func Add(ctx context.Context, log mlog.Log, senderAccount string, msgFile *os.Fi // Sanity check, internal consistency. qml[i].SenderDomainStr = formatIPDomain(qm.SenderDomain) qml[i].RecipientDomainStr = formatIPDomain(qm.RecipientDomain) - } - - if Localserve { - if senderAccount == "" { - return fmt.Errorf("cannot queue with localserve without local account") + if base && i > 0 && (qm.Sender().String() != qml[0].Sender().String() || !bytes.Equal(qm.MsgPrefix, qml[0].MsgPrefix)) { + base = false - } - acc, err := store.OpenAccount(log, senderAccount) - if err != nil { - return fmt.Errorf("opening sender account for immediate delivery with localserve: %v", err) - } - defer func() { - err := acc.Close() - log.Check(err, "closing account") - }() - conf, _ := acc.Conf() - err = nil - acc.WithWLock(func() { - for i, qm := range qml { - qml[i].SenderAccount = senderAccount - m := store.Message{Size: qm.Size, MsgPrefix: qm.MsgPrefix} - dest := conf.Destinations[qm.Sender().String()] - err = acc.DeliverDestination(log, dest, &m, msgFile) - if err != nil { - err = fmt.Errorf("delivering message: %v", err) - return // Returned again outside WithWLock. - } - } - }) - if err == nil { - log.Debug("immediately delivered from queue to sender") - } - return err } tx, err := DB.Begin(ctx, true) @@ -475,8 +659,9 @@ func Add(ctx context.Context, log mlog.Log, senderAccount string, msgFile *os.Fi return fmt.Errorf("getting queue hold rules") } - // Insert messages into queue. If there are multiple messages, they all get a - // non-zero BaseID that is the Msg.ID of the first message inserted. + // Insert messages into queue.
If multiple messages are to be delivered in a single + // transaction, they all get a non-zero BaseID that is the Msg.ID of the first + // message inserted. var baseID int64 for i := range qml { qml[i].SenderAccount = senderAccount @@ -490,7 +675,7 @@ func Add(ctx context.Context, log mlog.Log, senderAccount string, msgFile *os.Fi if err := tx.Insert(&qml[i]); err != nil { return err } - if i == 0 && len(qml) > 1 { + if base && i == 0 && len(qml) > 1 { baseID = qml[i].ID qml[i].BaseID = baseID if err := tx.Update(&qml[i]); err != nil { @@ -519,20 +704,22 @@ func Add(ctx context.Context, log mlog.Log, senderAccount string, msgFile *os.Fi } } + for _, m := range qml { + if m.Hold { + if err := metricHoldUpdate(tx); err != nil { + return err + } + break + } + } + if err := tx.Commit(); err != nil { return fmt.Errorf("commit transaction: %s", err) } tx = nil paths = nil - for _, m := range qml { - if m.Hold { - metricHoldUpdate() - break - } - } - - queuekick() + msgqueueKick() return nil } @@ -545,26 +732,30 @@ func formatIPDomain(d dns.IPDomain) string { } var ( - kick = make(chan struct{}, 1) + msgqueue = make(chan struct{}, 1) deliveryResults = make(chan string, 1) ) -func queuekick() { +func kick() { + msgqueueKick() + hookqueueKick() +} + +func msgqueueKick() { select { - case kick <- struct{}{}: + case msgqueue <- struct{}{}: default: } } // NextAttemptAdd adds a duration to the NextAttempt for all matching messages, and // kicks the queue. 
-func NextAttemptAdd(ctx context.Context, f Filter, d time.Duration) (affected int, err error) { +func NextAttemptAdd(ctx context.Context, filter Filter, d time.Duration) (affected int, err error) { err = DB.Write(ctx, func(tx *bstore.Tx) error { - q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { + q := bstore.QueryTx[Msg](tx) + if err := filter.apply(q); err != nil { return err } - var msgs []Msg msgs, err := q.List() if err != nil { return fmt.Errorf("listing matching messages: %v", err) @@ -581,122 +772,285 @@ func NextAttemptAdd(ctx context.Context, f Filter, d time.Duration) (affected in if err != nil { return 0, err } - queuekick() + msgqueueKick() return affected, nil } // NextAttemptSet sets NextAttempt for all matching messages to a new time, and // kicks the queue. -func NextAttemptSet(ctx context.Context, f Filter, t time.Time) (affected int, err error) { +func NextAttemptSet(ctx context.Context, filter Filter, t time.Time) (affected int, err error) { q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { + if err := filter.apply(q); err != nil { return 0, err } n, err := q.UpdateNonzero(Msg{NextAttempt: t}) if err != nil { return 0, fmt.Errorf("selecting and updating messages in queue: %v", err) } - queuekick() + msgqueueKick() return n, nil } // HoldSet sets Hold for all matching messages and kicks the queue. 
-func HoldSet(ctx context.Context, f Filter, hold bool) (affected int, err error) { - q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { +func HoldSet(ctx context.Context, filter Filter, hold bool) (affected int, err error) { + err = DB.Write(ctx, func(tx *bstore.Tx) error { + q := bstore.QueryTx[Msg](tx) + if err := filter.apply(q); err != nil { + return err + } + n, err := q.UpdateFields(map[string]any{"Hold": hold}) + if err != nil { + return fmt.Errorf("selecting and updating messages in queue: %v", err) + } + affected = n + return metricHoldUpdate(tx) + }) + if err != nil { return 0, err } - n, err := q.UpdateFields(map[string]any{"Hold": hold}) - if err != nil { - return 0, fmt.Errorf("selecting and updating messages in queue: %v", err) - } - queuekick() - metricHoldUpdate() - return n, nil + msgqueueKick() + return affected, nil } // TransportSet changes the transport to use for the matching messages. -func TransportSet(ctx context.Context, f Filter, transport string) (affected int, err error) { +func TransportSet(ctx context.Context, filter Filter, transport string) (affected int, err error) { q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { + if err := filter.apply(q); err != nil { return 0, err } n, err := q.UpdateFields(map[string]any{"Transport": transport}) if err != nil { return 0, fmt.Errorf("selecting and updating messages in queue: %v", err) } - queuekick() + msgqueueKick() return n, nil } -// Fail marks matching messages as failed for delivery and delivers DSNs to the sender. +// Fail marks matching messages as failed for delivery, delivers a DSN to the +// sender, and sends a webhook. +// +// Returns number of messages removed, which can be non-zero even in case of an +// error. func Fail(ctx context.Context, log mlog.Log, f Filter) (affected int, err error) { + return failDrop(ctx, log, f, true) +} + +// Drop removes matching messages from the queue. 
Messages are added as retired +// messages, and webhooks with the "canceled" event are queued. +// +// Returns number of messages removed, which can be non-zero even in case of an +// error. +func Drop(ctx context.Context, log mlog.Log, f Filter) (affected int, err error) { + return failDrop(ctx, log, f, false) +} + +func failDrop(ctx context.Context, log mlog.Log, filter Filter, fail bool) (affected int, err error) { + var msgs []Msg err = DB.Write(ctx, func(tx *bstore.Tx) error { q := bstore.QueryTx[Msg](tx) - if err := f.apply(q); err != nil { + if err := filter.apply(q); err != nil { return err } - var msgs []Msg - q.Gather(&msgs) - n, err := q.Delete() + var err error + msgs, err = q.List() if err != nil { - return fmt.Errorf("selecting and deleting messages from queue: %v", err) + return fmt.Errorf("getting messages to delete: %v", err) } - var remoteMTA dsn.NameIP - for _, m := range msgs { - if m.LastAttempt == nil { - now := time.Now() - m.LastAttempt = &now - } - deliverDSNFailure(ctx, log, m, remoteMTA, "", "delivery canceled by admin", nil) + if len(msgs) == 0 { + return nil } - affected = n - return nil + + now := time.Now() + var remoteMTA dsn.NameIP + for i := range msgs { + result := MsgResult{ + Start: now, + Error: "delivery canceled by admin", + } + msgs[i].Results = append(msgs[i].Results, result) + if fail { + if msgs[i].LastAttempt == nil { + msgs[i].LastAttempt = &now + } + deliverDSNFailure(log, msgs[i], remoteMTA, "", result.Error, nil) + } + } + event := webhook.EventCanceled + if fail { + event = webhook.EventFailed + } + if err := retireMsgs(log, tx, event, 0, "", nil, msgs...); err != nil { + return fmt.Errorf("removing queue messages from database: %w", err) + } + return metricHoldUpdate(tx) }) if err != nil { - return 0, fmt.Errorf("selecting and updating messages in queue: %v", err) - } - queuekick() - metricHoldUpdate() - return affected, nil -} - -// Drop removes matching messages from the queue. -// Returns number of messages removed.
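Fail and Drop now both delegate to failDrop; the only differences are whether a DSN is delivered to the sender and which webhook event gets recorded. A tiny sketch of that branch, with plain strings standing in for webhook.EventFailed and webhook.EventCanceled:

```go
package main

import "fmt"

// cancelOutcome mirrors the branch in failDrop: failing a message sends
// a DSN and records a "failed" webhook event, while dropping only
// records a "canceled" event. The event strings here are illustrative
// stand-ins for the webhook package constants.
func cancelOutcome(fail bool) (event string, sendDSN bool) {
	if fail {
		return "failed", true
	}
	return "canceled", false
}

func main() {
	ev, dsn := cancelOutcome(true)
	fmt.Println(ev, dsn) // → failed true
}
```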
-func Drop(ctx context.Context, log mlog.Log, f Filter) (affected int, err error) { - q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { return 0, err } - var msgs []Msg - q.Gather(&msgs) - n, err := q.Delete() - if err != nil { - return 0, fmt.Errorf("selecting and deleting messages from queue: %v", err) - } - for _, m := range msgs { - p := m.MessagePath() - if err := os.Remove(p); err != nil { - log.Errorx("removing queue message from file system", err, slog.Int64("queuemsgid", m.ID), slog.String("path", p)) + if len(msgs) > 0 { + if err := removeMsgsFS(log, msgs...); err != nil { + return len(msgs), fmt.Errorf("removing queue messages from file system: %w", err) } } - queuekick() - metricHoldUpdate() - return n, nil + kick() + return len(msgs), nil } // RequireTLSSet updates the RequireTLS field of matching messages. -func RequireTLSSet(ctx context.Context, f Filter, requireTLS *bool) (affected int, err error) { +func RequireTLSSet(ctx context.Context, filter Filter, requireTLS *bool) (affected int, err error) { q := bstore.QueryDB[Msg](ctx, DB) - if err := f.apply(q); err != nil { + if err := filter.apply(q); err != nil { return 0, err } n, err := q.UpdateFields(map[string]any{"RequireTLS": requireTLS}) - queuekick() + msgqueueKick() return n, err } +// RetiredFilter filters messages to list or operate on. Used by admin web interface +// and cli. +// +// Only non-empty/non-zero values are applied to the filter. Leaving all fields +// empty/zero matches all messages. +type RetiredFilter struct { + Max int + IDs []int64 + Account string + From string + To string + Submitted string // Whether submitted before/after a time relative to now. ">$duration" or "<$duration", also with "now" for duration. + LastActivity string // ">$duration" or "<$duration", also with "now" for duration. 
+ Transport *string + Success *bool +} + +func (f RetiredFilter) apply(q *bstore.Query[MsgRetired]) error { + if len(f.IDs) > 0 { + q.FilterIDs(f.IDs) + } + applyTime := func(field string, s string) error { + orig := s + var before bool + if strings.HasPrefix(s, "<") { + before = true + } else if !strings.HasPrefix(s, ">") { + return fmt.Errorf(`must start with "<" for before or ">" for after a duration`) + } + s = strings.TrimSpace(s[1:]) + var t time.Time + if s == "now" { + t = time.Now() + } else if d, err := time.ParseDuration(s); err != nil { + return fmt.Errorf("parsing duration %q: %v", orig, err) + } else { + t = time.Now().Add(d) + } + if before { + q.FilterLess(field, t) + } else { + q.FilterGreater(field, t) + } + return nil + } + if f.Submitted != "" { + if err := applyTime("Queued", f.Submitted); err != nil { + return fmt.Errorf("applying filter for submitted: %v", err) + } + } + if f.LastActivity != "" { + if err := applyTime("LastActivity", f.LastActivity); err != nil { + return fmt.Errorf("applying filter for last activity: %v", err) + } + } + if f.Account != "" { + q.FilterNonzero(MsgRetired{SenderAccount: f.Account}) + } + if f.Transport != nil { + q.FilterEqual("Transport", *f.Transport) + } + if f.From != "" || f.To != "" { + q.FilterFn(func(m MsgRetired) bool { + return f.From != "" && strings.Contains(m.SenderLocalpart.String()+"@"+m.SenderDomainStr, f.From) || f.To != "" && strings.Contains(m.Recipient().XString(true), f.To) + }) + } + if f.Success != nil { + q.FilterEqual("Success", *f.Success) + } + if f.Max != 0 { + q.Limit(f.Max) + } + return nil +} + +type RetiredSort struct { + Field string // "Queued" or "LastActivity"/"". + LastID int64 // If > 0, we return objects beyond this, less/greater depending on Asc. + Last any // Value of Field for last object. Must be set iff LastID is set. + Asc bool // Ascending, or descending. 
+} + +func (s RetiredSort) apply(q *bstore.Query[MsgRetired]) error { + switch s.Field { + case "", "LastActivity": + s.Field = "LastActivity" + case "Queued": + s.Field = "Queued" + default: + return fmt.Errorf("unknown sort order field %q", s.Field) + } + + if s.LastID > 0 { + ls, ok := s.Last.(string) + if !ok { + return fmt.Errorf("last should be string with time, not %T %q", s.Last, s.Last) + } + last, err := time.Parse(time.RFC3339Nano, ls) + if err != nil { + last, err = time.Parse(time.RFC3339, ls) + } + if err != nil { + return fmt.Errorf("parsing last %q as time: %v", s.Last, err) + } + q.FilterNotEqual("ID", s.LastID) + var fieldEqual func(m MsgRetired) bool + if s.Field == "LastActivity" { + fieldEqual = func(m MsgRetired) bool { return m.LastActivity == last } + } else { + fieldEqual = func(m MsgRetired) bool { return m.Queued == last } + } + if s.Asc { + q.FilterGreaterEqual(s.Field, last) + q.FilterFn(func(mr MsgRetired) bool { + return !fieldEqual(mr) || mr.ID > s.LastID + }) + } else { + q.FilterLessEqual(s.Field, last) + q.FilterFn(func(mr MsgRetired) bool { + return !fieldEqual(mr) || mr.ID < s.LastID + }) + } + } + if s.Asc { + q.SortAsc(s.Field, "ID") + } else { + q.SortDesc(s.Field, "ID") + } + return nil +} + +// RetiredList returns retired messages. +func RetiredList(ctx context.Context, filter RetiredFilter, sort RetiredSort) ([]MsgRetired, error) { + q := bstore.QueryDB[MsgRetired](ctx, DB) + if err := filter.apply(q); err != nil { + return nil, err + } + if err := sort.apply(q); err != nil { + return nil, err + } + return q.List() +} + type ReadReaderAtCloser interface { io.ReadCloser io.ReaderAt @@ -718,42 +1072,85 @@ func OpenMessage(ctx context.Context, id int64) (ReadReaderAtCloser, error) { } const maxConcurrentDeliveries = 10 +const maxConcurrentHookDeliveries = 10 -// Start opens the database by calling Init, then starts the delivery process. 
+// Start opens the database by calling Init, then starts the delivery and cleanup +// processes. func Start(resolver dns.Resolver, done chan struct{}) error { if err := Init(); err != nil { return err } + go startQueue(resolver, done) + go startHookQueue(done) + + go cleanupMsgRetired(done) + go cleanupHookRetired(done) + + return nil +} + +func cleanupMsgRetired(done chan struct{}) { log := mlog.New("queue", nil) - // High-level delivery strategy advice: ../rfc/5321:3685 - go func() { - // Map keys are either dns.Domain.Name()'s, or string-formatted IP addresses. - busyDomains := map[string]struct{}{} - - timer := time.NewTimer(0) - - for { - select { - case <-mox.Shutdown.Done(): - done <- struct{}{} - return - case <-kick: - case <-timer.C: - case domain := <-deliveryResults: - delete(busyDomains, domain) - } - - if len(busyDomains) >= maxConcurrentDeliveries { - continue - } - - launchWork(log, resolver, busyDomains) - timer.Reset(nextWork(mox.Shutdown, log, busyDomains)) + defer func() { + x := recover() + if x != nil { + log.Error("unhandled panic in cleanupMsgRetired", slog.Any("x", x)) + debug.PrintStack() + metrics.PanicInc(metrics.Queue) } }() - return nil + + timer := time.NewTimer(3 * time.Second) + for { + select { + case <-mox.Shutdown.Done(): + done <- struct{}{} + return + case <-timer.C: + } + + cleanupMsgRetiredSingle(log) + timer.Reset(time.Hour) + } +} + +func cleanupMsgRetiredSingle(log mlog.Log) { + n, err := bstore.QueryDB[MsgRetired](mox.Shutdown, DB).FilterLess("KeepUntil", time.Now()).Delete() + log.Check(err, "removing old retired messages") + if n > 0 { + log.Debug("cleaned up retired messages", slog.Int("count", n)) + } +} + +func startQueue(resolver dns.Resolver, done chan struct{}) { + // High-level delivery strategy advice: ../rfc/5321:3685 + log := mlog.New("queue", nil) + + // Map keys are either dns.Domain.Name()'s, or string-formatted IP addresses. 
+ busyDomains := map[string]struct{}{} + + timer := time.NewTimer(0) + + for { + select { + case <-mox.Shutdown.Done(): + done <- struct{}{} + return + case <-msgqueue: + case <-timer.C: + case domain := <-deliveryResults: + delete(busyDomains, domain) + } + + if len(busyDomains) >= maxConcurrentDeliveries { + continue + } + + launchWork(log, resolver, busyDomains) + timer.Reset(nextWork(mox.Shutdown, log, busyDomains)) + } } func nextWork(ctx context.Context, log mlog.Log, busyDomains map[string]struct{}) time.Duration { @@ -814,24 +1211,12 @@ func launchWork(log mlog.Log, resolver dns.Resolver, busyDomains map[string]stru return len(msgs) } -// Remove message from queue in database and file system. -func queueDelete(ctx context.Context, msgIDs ...int64) error { - err := DB.Write(ctx, func(tx *bstore.Tx) error { - for _, id := range msgIDs { - if err := tx.Delete(&Msg{ID: id}); err != nil { - return err - } - } - return nil - }) - if err != nil { - return err - } - // If removing from database fails, we'll also leave the file in the file system. +// todo future: we may consider keeping message files around for a while after retiring. especially for failures to deliver. to inspect what exactly wasn't delivered. +func removeMsgsFS(log mlog.Log, msgs ...Msg) error { var errs []string - for _, id := range msgIDs { - p := mox.DataDirPath(filepath.Join("queue", store.MessagePath(id))) + for _, m := range msgs { + p := mox.DataDirPath(filepath.Join("queue", store.MessagePath(m.ID))) if err := os.Remove(p); err != nil { errs = append(errs, fmt.Sprintf("%s: %v", p, err)) } @@ -842,51 +1227,169 @@ func queueDelete(ctx context.Context, msgIDs ...int64) error { return nil } +// Move one or more messages to the retired list, or remove them. Webhooks are scheduled. +// The IDs in suppressedMsgIDs are of messages that caused a suppression to be added. +// +// Callers should update Msg.Results before calling. +// +// Callers must remove the messages from the file system afterwards, see +// removeMsgsFS.
Callers must also kick the message and webhook queues. +func retireMsgs(log mlog.Log, tx *bstore.Tx, event webhook.OutgoingEvent, code int, secode string, suppressedMsgIDs []int64, msgs ...Msg) error { + now := time.Now() + + var hooks []Hook + m0 := msgs[0] + accConf, ok := mox.Conf.Account(m0.SenderAccount) + var hookURL string + if accConf.OutgoingWebhook != nil { + hookURL = accConf.OutgoingWebhook.URL + } + log.Debug("retiring messages from queue", slog.Any("event", event), slog.String("account", m0.SenderAccount), slog.Bool("ok", ok), slog.String("webhookurl", hookURL)) + if hookURL != "" && (len(accConf.OutgoingWebhook.Events) == 0 || slices.Contains(accConf.OutgoingWebhook.Events, string(event))) { + for _, m := range msgs { + suppressing := slices.Contains(suppressedMsgIDs, m.ID) + h, err := hookCompose(m, hookURL, accConf.OutgoingWebhook.Authorization, event, suppressing, code, secode) + if err != nil { + log.Errorx("composing webhooks while retiring messages from queue, not queueing hook for message", err, slog.Int64("msgid", m.ID), slog.Any("recipient", m.Recipient())) + } else { + hooks = append(hooks, h) + } + } + } + + msgKeep := 24 * 7 * time.Hour + hookKeep := 24 * 7 * time.Hour + if ok { + msgKeep = accConf.KeepRetiredMessagePeriod + hookKeep = accConf.KeepRetiredWebhookPeriod + } + + for _, m := range msgs { + if err := tx.Delete(&m); err != nil { + return err + } + } + if msgKeep > 0 { + for _, m := range msgs { + rm := m.Retired(event == webhook.EventDelivered, now, now.Add(msgKeep)) + if err := tx.Insert(&rm); err != nil { + return err + } + } + } + + for i := range hooks { + if err := hookInsert(tx, &hooks[i], now, hookKeep); err != nil { + return fmt.Errorf("enqueueing webhooks while retiring messages from queue: %v", err) + } + } + + if len(hooks) > 0 { + for _, h := range hooks { + log.Debug("queued webhook while retiring message from queue", h.attrs()...) 
+ } + hookqueueKick() + } + return nil +} + // deliver attempts to deliver a message. // The queue is updated, either by removing a delivered or permanently failed // message, or updating the time for the next attempt. A DSN may be sent. -func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { +func deliver(log mlog.Log, resolver dns.Resolver, m0 Msg) { ctx := mox.Shutdown qlog := log.WithCid(mox.Cid()).With( - slog.Any("from", m.Sender()), - slog.Int("attempts", m.Attempts)) + slog.Any("from", m0.Sender()), + slog.Int("attempts", m0.Attempts)) defer func() { - deliveryResults <- formatIPDomain(m.RecipientDomain) + deliveryResults <- formatIPDomain(m0.RecipientDomain) x := recover() if x != nil { - qlog.Error("deliver panic", slog.Any("panic", x), slog.Int64("msgid", m.ID), slog.Any("recipient", m.Recipient())) + qlog.Error("deliver panic", slog.Any("panic", x), slog.Int64("msgid", m0.ID), slog.Any("recipient", m0.Recipient())) debug.PrintStack() metrics.PanicInc(metrics.Queue) } }() - // We register this attempt by setting last_attempt, and already next_attempt time - // in the future with exponential backoff. If we run into trouble delivery below, - // at least we won't be bothering the receiving server with our problems. + // We'll use a single transaction for the various checks, committing as soon as + // we're done with it. + xtx, err := DB.Begin(mox.Shutdown, true) + if err != nil { + qlog.Errorx("transaction for gathering messages to deliver", err) + return + } + defer func() { + if xtx != nil { + err := xtx.Rollback() + qlog.Check(err, "rolling back transaction after error delivering") + } + }() + + // We register this attempt by setting LastAttempt, adding an empty Result, and + // already setting NextAttempt in the future with exponential backoff. If we run + // into trouble delivery below, at least we won't be bothering the receiving server + // with our problems. 
// Delivery attempts: immediately, 7.5m, 15m, 30m, 1h, 2h (send delayed DSN), 4h, // 8h, 16h (send permanent failure DSN). // ../rfc/5321:3703 // todo future: make the back off times configurable. ../rfc/5321:3713 - backoff := time.Duration(7*60+30+jitter.Intn(10)-5) * time.Second - for i := 0; i < m.Attempts; i++ { - backoff *= time.Duration(2) - } - m.Attempts++ - origNextAttempt := m.NextAttempt now := time.Now() - m.LastAttempt = &now - m.NextAttempt = now.Add(backoff) - qup := bstore.QueryDB[Msg](mox.Shutdown, DB) - qup.FilterID(m.ID) - update := Msg{Attempts: m.Attempts, NextAttempt: m.NextAttempt, LastAttempt: m.LastAttempt} - if _, err := qup.UpdateNonzero(update); err != nil { - qlog.Errorx("storing delivery attempt", err, slog.Int64("msgid", m.ID), slog.Any("recipient", m.Recipient())) + var backoff time.Duration + var origNextAttempt time.Time + prepare := func() error { + // Refresh message within transaction. + m0 = Msg{ID: m0.ID} + if err := xtx.Get(&m0); err != nil { + return fmt.Errorf("get message to be delivered: %v", err) + } + + backoff = time.Duration(7*60+30+jitter.Intn(10)-5) * time.Second + for i := 0; i < m0.Attempts; i++ { + backoff *= time.Duration(2) + } + m0.Attempts++ + origNextAttempt = m0.NextAttempt + m0.LastAttempt = &now + m0.NextAttempt = now.Add(backoff) + m0.Results = append(m0.Results, MsgResult{Start: now, Error: resultErrorDelivering}) + if err := xtx.Update(&m0); err != nil { + return fmt.Errorf("update message to be delivered: %v", err) + } + return nil + } + if err := prepare(); err != nil { + qlog.Errorx("storing delivery attempt", err, slog.Int64("msgid", m0.ID), slog.Any("recipient", m0.Recipient())) return } + var remoteMTA dsn.NameIP // Zero value, will not be included in DSN. ../rfc/3464:1027 + + // Check if recipient is on suppression list. If so, fail delivery. 
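The suppression lookup that follows matches on a canonicalized "base address" rather than the literal recipient. The exact rules live in baseAddress and depend on account configuration; the sketch below is only a plausible approximation, assuming the whole address is lowercased and a "+tag" is stripped from the localpart:

```go
package main

import (
	"fmt"
	"strings"
)

// simpleBaseAddress is an illustrative approximation of the kind of
// canonicalization a suppression-list lookup relies on: lowercase the
// address and drop a "+tag" from the localpart. Mox's real baseAddress
// may differ (e.g. configurable catchall separators), so treat this as
// a sketch, not the implementation.
func simpleBaseAddress(addr string) string {
	addr = strings.ToLower(addr)
	local, domain, ok := strings.Cut(addr, "@")
	if !ok {
		return addr
	}
	if i := strings.Index(local, "+"); i >= 0 {
		local = local[:i]
	}
	return local + "@" + domain
}

func main() {
	fmt.Println(simpleBaseAddress("MJL+Lists@Mox.Example")) // → mjl@mox.example
}
```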
+ if m0.SenderAccount != "" { + path := smtp.Path{Localpart: m0.RecipientLocalpart, IPDomain: m0.RecipientDomain} + baseAddr := baseAddress(path).XString(true) + qsup := bstore.QueryTx[webapi.Suppression](xtx) + qsup.FilterNonzero(webapi.Suppression{Account: m0.SenderAccount, BaseAddress: baseAddr}) + exists, err := qsup.Exists() + if err != nil || exists { + if err != nil { + qlog.Errorx("checking whether recipient address is in suppression list", err) + } else { + err := fmt.Errorf("not delivering to recipient address %s: %w", path.XString(true), errSuppressed) + err = smtpclient.Error{Permanent: true, Err: err} + failMsgsTx(qlog, xtx, []*Msg{&m0}, m0.DialedIPs, backoff, remoteMTA, err) + } + err = xtx.Commit() + qlog.Check(err, "commit processing failure to deliver messages") + xtx = nil + kick() + return + } + } + resolveTransport := func(mm Msg) (string, config.Transport, bool) { if mm.Transport != "" { transport, ok := mox.Conf.Static.Transports[mm.Transport] @@ -900,12 +1403,15 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { } // Find route for transport to use for delivery attempt. - m.Attempts-- - transportName, transport, transportOK := resolveTransport(m) - m.Attempts++ + m0.Attempts-- + transportName, transport, transportOK := resolveTransport(m0) + m0.Attempts++ if !transportOK { - var remoteMTA dsn.NameIP // Zero value, will not be included in DSN. 
../rfc/3464:1027 - fail(ctx, qlog, []*Msg{&m}, m.DialedIPs, backoff, remoteMTA, fmt.Errorf("cannot find transport %q", m.Transport)) + failMsgsTx(qlog, xtx, []*Msg{&m0}, m0.DialedIPs, backoff, remoteMTA, fmt.Errorf("cannot find transport %q", m0.Transport)) + err = xtx.Commit() + qlog.Check(err, "commit processing failure to deliver messages") + xtx = nil + kick() return } @@ -917,18 +1423,18 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { // Attempt to gather more recipients for this identical message, only with the same // recipient domain, and under the same conditions (recipientdomain, attempts, // requiretls, transport). ../rfc/5321:3759 - msgs := []*Msg{&m} - if m.BaseID != 0 { - err := DB.Write(mox.Shutdown, func(tx *bstore.Tx) error { - q := bstore.QueryTx[Msg](tx) - q.FilterNonzero(Msg{BaseID: m.BaseID, RecipientDomainStr: m.RecipientDomainStr, Attempts: m.Attempts - 1}) - q.FilterNotEqual("ID", m.ID) + msgs := []*Msg{&m0} + if m0.BaseID != 0 { + gather := func() error { + q := bstore.QueryTx[Msg](xtx) + q.FilterNonzero(Msg{BaseID: m0.BaseID, RecipientDomainStr: m0.RecipientDomainStr, Attempts: m0.Attempts - 1}) + q.FilterNotEqual("ID", m0.ID) q.FilterLessEqual("NextAttempt", origNextAttempt) q.FilterEqual("Hold", false) err := q.ForEach(func(xm Msg) error { - mrtls := m.RequireTLS != nil + mrtls := m0.RequireTLS != nil xmrtls := xm.RequireTLS != nil - if mrtls != xmrtls || mrtls && *m.RequireTLS != *xm.RequireTLS { + if mrtls != xmrtls || mrtls && *m0.RequireTLS != *xm.RequireTLS { return nil } tn, _, ok := resolveTransport(xm) @@ -944,19 +1450,27 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { // Mark these additional messages as attempted too. 
for _, mm := range msgs[1:] { mm.Attempts++ - mm.NextAttempt = m.NextAttempt - mm.LastAttempt = m.LastAttempt - if err := tx.Update(mm); err != nil { + mm.NextAttempt = m0.NextAttempt + mm.LastAttempt = m0.LastAttempt + mm.Results = append(mm.Results, MsgResult{Start: now, Error: resultErrorDelivering}) + if err := xtx.Update(mm); err != nil { return fmt.Errorf("updating more message recipients for smtp transaction: %v", err) } } return nil - }) - if err != nil { + } + if err := gather(); err != nil { qlog.Errorx("error finding more recipients for message, will attempt to send to single recipient", err) msgs = msgs[:1] } } + + if err := xtx.Commit(); err != nil { + qlog.Errorx("commit of preparation to deliver", err, slog.Any("msgid", m0.ID)) + return + } + xtx = nil + if len(msgs) > 1 { ids := make([]int64, len(msgs)) rcpts := make([]smtp.Path, len(msgs)) @@ -966,7 +1480,121 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { } qlog.Debug("delivering to multiple recipients", slog.Any("msgids", ids), slog.Any("recipients", rcpts)) } else { - qlog.Debug("delivering to single recipient", slog.Any("msgid", m.ID), slog.Any("recipient", m.Recipient())) + qlog.Debug("delivering to single recipient", slog.Any("msgid", m0.ID), slog.Any("recipient", m0.Recipient())) + } + + if Localserve { + // We are not actually going to deliver. We'll deliver to the sender account. + // Unless recipients match certain special patterns, in which case we can pretend + // to cause delivery failures. Useful for testing. 
+ + acc, err := store.OpenAccount(log, m0.SenderAccount) + if err != nil { + log.Errorx("opening sender account for immediate delivery with localserve, skipping", err) + return + } + defer func() { + err := acc.Close() + log.Check(err, "closing account") + }() + conf, _ := acc.Conf() + + p := m0.MessagePath() + msgFile, err := os.Open(p) + if err != nil { + xerr := fmt.Errorf("open message for delivery: %v", err) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, xerr) + return + } + defer func() { + err := msgFile.Close() + qlog.Check(err, "closing message after delivery attempt") + }() + + // Parse the message for a From-address, but continue on error. + fromAddr, _, _, fromErr := message.From(qlog.Logger, false, store.FileMsgReader(m0.MsgPrefix, msgFile), nil) + log.Check(fromErr, "parsing message From header") + + for _, qm := range msgs { + code, timeout := mox.LocalserveNeedsError(qm.RecipientLocalpart) + if timeout || code != 0 { + err := errors.New("simulated error due to localserve mode and special recipient localpart") + if timeout { + err = fmt.Errorf("%s: timeout", err) + } else { + err = smtpclient.Error{Permanent: code/100 == 5, Code: code, Err: err} + } + failMsgsDB(qlog, []*Msg{qm}, m0.DialedIPs, backoff, remoteMTA, err) + continue + } + + msgFromOrgDomain := publicsuffix.Lookup(ctx, qlog.Logger, fromAddr.Domain) + + dm := store.Message{ + RemoteIP: "::1", + RemoteIPMasked1: "::", + RemoteIPMasked2: "::", + RemoteIPMasked3: "::", + MailFrom: qm.Sender().XString(true), + MailFromLocalpart: qm.SenderLocalpart, + MailFromDomain: qm.SenderDomainStr, + RcptToLocalpart: qm.RecipientLocalpart, + RcptToDomain: qm.RecipientDomainStr, + MsgFromLocalpart: fromAddr.Localpart, + MsgFromDomain: fromAddr.Domain.Name(), + MsgFromOrgDomain: msgFromOrgDomain.Name(), + EHLOValidated: true, + MailFromValidated: true, + MsgFromValidated: true, + EHLOValidation: store.ValidationPass, + MailFromValidation: store.ValidationPass, + MsgFromValidation: 
store.ValidationDMARC, + ReceivedRequireTLS: qm.RequireTLS != nil && *qm.RequireTLS, + Size: qm.Size, + MsgPrefix: qm.MsgPrefix, + } + var err error + var mb store.Mailbox + acc.WithWLock(func() { + dest := conf.Destinations[qm.Sender().String()] + err = acc.DeliverDestination(log, dest, &dm, msgFile) + if err != nil { + err = fmt.Errorf("delivering message: %v", err) + return // Returned again outside WithWLock. + } + + mb = store.Mailbox{ID: dm.MailboxID} + if err = acc.DB.Get(context.Background(), &mb); err != nil { + err = fmt.Errorf("getting mailbox for message after delivery: %v", err) + } + }) + if err != nil { + log.Errorx("delivering from queue to original sender account failed, skipping", err) + continue + } + log.Debug("delivered from queue to original sender account") + qm.markResult(0, "", "", true) + err = DB.Write(context.Background(), func(tx *bstore.Tx) error { + return retireMsgs(qlog, tx, webhook.EventDelivered, smtp.C250Completed, "", nil, *qm) + }) + if err != nil { + log.Errorx("removing queue message from database after local delivery to sender account", err) + } else if err := removeMsgsFS(qlog, *qm); err != nil { + log.Errorx("removing queue messages from file system after local delivery to sender account", err) + } + kick() + + // Process incoming message for incoming webhook. 
+ mr := store.FileMsgReader(dm.MsgPrefix, msgFile) + part, err := dm.LoadPart(mr) + if err != nil { + log.Errorx("loading parsed part for evaluating webhook", err) + } else { + err = Incoming(context.Background(), log, acc, m0.MessageID, dm, part, mb.Name) + log.Check(err, "queueing webhook for incoming delivery") + } + } + return } // We gather TLS connection successes and failures during delivery, and we store @@ -985,7 +1613,7 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { var recipientDomainResult tlsrpt.Result var hostResults []tlsrpt.Result defer func() { - if mox.Conf.Static.NoOutgoingTLSReports || m.RecipientDomain.IsIP() { + if mox.Conf.Static.NoOutgoingTLSReports || m0.RecipientDomain.IsIP() { return } @@ -1028,9 +1656,9 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { tlsResult := tlsrptdb.TLSResult{ PolicyDomain: policyDomain.Name(), DayUTC: dayUTC, - RecipientDomain: m.RecipientDomain.Domain.Name(), + RecipientDomain: m0.RecipientDomain.Domain.Name(), IsHost: isHost, - SendReport: !m.IsTLSReport && (!m.IsDMARCReport || failure), + SendReport: !m0.IsTLSReport && (!m0.IsDMARCReport || failure), Results: []tlsrpt.Result{r}, } results = append(results, tlsResult) @@ -1065,10 +1693,10 @@ func deliver(log mlog.Log, resolver dns.Resolver, m Msg) { if transport.Socks != nil { socksdialer, err := proxy.SOCKS5("tcp", transport.Socks.Address, nil, &net.Dialer{}) if err != nil { - fail(ctx, qlog, msgs, msgs[0].DialedIPs, backoff, dsn.NameIP{}, fmt.Errorf("socks dialer: %v", err)) + failMsgsDB(qlog, msgs, msgs[0].DialedIPs, backoff, dsn.NameIP{}, fmt.Errorf("socks dialer: %v", err)) return } else if d, ok := socksdialer.(smtpclient.Dialer); !ok { - fail(ctx, qlog, msgs, msgs[0].DialedIPs, backoff, dsn.NameIP{}, fmt.Errorf("socks dialer is not a contextdialer")) + failMsgsDB(qlog, msgs, msgs[0].DialedIPs, backoff, dsn.NameIP{}, fmt.Errorf("socks dialer is not a contextdialer")) return } else { dialer = d diff --git a/queue/queue_test.go 
b/queue/queue_test.go index 5bd6899..e9a7710 100644 --- a/queue/queue_test.go +++ b/queue/queue_test.go @@ -8,6 +8,7 @@ import ( "crypto/sha256" "crypto/tls" "crypto/x509" + "encoding/json" "fmt" "io" "math/big" @@ -15,7 +16,9 @@ import ( "os" "path/filepath" "reflect" + "slices" "strings" + "sync" "testing" "time" @@ -30,6 +33,7 @@ import ( "github.com/mjl-/mox/store" "github.com/mjl-/mox/tlsrpt" "github.com/mjl-/mox/tlsrptdb" + "github.com/mjl-/mox/webhook" ) var ctxbg = context.Background() @@ -45,13 +49,12 @@ func tcheck(t *testing.T, err error, msg string) { func tcompare(t *testing.T, got, exp any) { t.Helper() if !reflect.DeepEqual(got, exp) { - t.Fatalf("got %v, expected %v", got, exp) + t.Fatalf("got:\n%#v\nexpected:\n%#v", got, exp) } } func setup(t *testing.T) (*store.Account, func()) { // Prepare config so email can be delivered to mjl@mox.example. - os.RemoveAll("../testdata/queue/data") log := mlog.New("queue", nil) mox.Context = ctxbg @@ -108,7 +111,7 @@ func TestQueue(t *testing.T) { } } - msgs, err := List(ctxbg, Filter{}) + msgs, err := List(ctxbg, Filter{}, Sort{}) tcheck(t, err, "listing messages in queue") if len(msgs) != 0 { t.Fatalf("got %d messages in queue, expected 0", len(msgs)) @@ -121,19 +124,19 @@ func TestQueue(t *testing.T) { var qm Msg - qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") - qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") - qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, 
nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") - msgs, err = List(ctxbg, Filter{}) + msgs, err = List(ctxbg, Filter{}, Sort{}) tcheck(t, err, "listing queue") if len(msgs) != 3 { t.Fatalf("got msgs %v, expected 1", msgs) @@ -173,7 +176,7 @@ func TestQueue(t *testing.T) { // Check filter through various List calls. Other code uses the same filtering function. filter := func(f Filter, expn int) { t.Helper() - l, err := List(ctxbg, f) + l, err := List(ctxbg, f, Sort{}) tcheck(t, err, "list messages") tcompare(t, len(l), expn) } @@ -223,43 +226,34 @@ func TestQueue(t *testing.T) { "other.example.": {{Host: "mail.mox.example", Pref: 10}}, }, } - // Override dial function. We'll make connecting fail for now. - dialed := make(chan struct{}, 1) + + // Try a failing delivery attempt. + var ndial int smtpclient.DialHook = func(ctx context.Context, dialer smtpclient.Dialer, timeout time.Duration, addr string, laddr net.Addr) (net.Conn, error) { - dialed <- struct{}{} + ndial++ return nil, fmt.Errorf("failure from test") } defer func() { smtpclient.DialHook = nil }() - launchWork(pkglog, resolver, map[string]struct{}{}) - - moxCert := fakeCert(t, "mail.mox.example", false) + n = launchWork(pkglog, resolver, map[string]struct{}{}) + tcompare(t, n, 1) // Wait until we see the dial and the failed attempt. timer := time.NewTimer(time.Second) defer timer.Stop() select { - case <-dialed: - i := 0 - for { - m, err := bstore.QueryDB[Msg](ctxbg, DB).Get() - tcheck(t, err, "get") - if m.Attempts == 1 { - break - } - i++ - if i == 10 { - t.Fatalf("message in queue not updated") - } - time.Sleep(100 * time.Millisecond) - } + case <-deliveryResults: + tcompare(t, ndial, 1) + m, err := bstore.QueryDB[Msg](ctxbg, DB).Get() + tcheck(t, err, "get") + tcompare(t, m.Attempts, 1) case <-timer.C: - t.Fatalf("no dial within 1s") + t.Fatalf("no delivery within 1s") } - <-deliveryResults // Deliver sends here. + // OpenMessage. 
_, err = OpenMessage(ctxbg, msg.ID+1) if err != bstore.ErrAbsent { t.Fatalf("OpenMessage, got %v, expected ErrAbsent", err) @@ -285,13 +279,7 @@ func TestQueue(t *testing.T) { t.Fatalf("kicked %d, expected 1", n) } - smtpdone := make(chan struct{}) - nfakeSMTPServer := func(server net.Conn, rcpts, ntx int, onercpt bool, extensions []string) { - defer func() { - smtpdone <- struct{}{} - }() - // We do a minimal fake smtp server. We cannot import smtpserver.Serve due to // cyclic dependencies. fmt.Fprintf(server, "220 mail.mox.example\r\n") @@ -345,10 +333,6 @@ func TestQueue(t *testing.T) { // Server that returns an error after first recipient. We expect another // transaction to deliver the second message. fakeSMTPServerRcpt1 := func(server net.Conn) { - defer func() { - smtpdone <- struct{}{} - }() - // We do a minimal fake smtp server. We cannot import smtpserver.Serve due to // cyclic dependencies. fmt.Fprintf(server, "220 mail.mox.example\r\n") @@ -394,14 +378,11 @@ func TestQueue(t *testing.T) { writeline("221 ok") } + moxCert := fakeCert(t, "mail.mox.example", false) goodTLSConfig := tls.Config{Certificates: []tls.Certificate{moxCert}} makeFakeSMTPSTARTTLSServer := func(tlsConfig *tls.Config, nstarttls int, requiretls bool) func(server net.Conn) { attempt := 0 return func(server net.Conn) { - defer func() { - smtpdone <- struct{}{} - }() - attempt++ // We do a minimal fake smtp server. We cannot import smtpserver.Serve due to @@ -461,10 +442,6 @@ func TestQueue(t *testing.T) { } nfakeSubmitServer := func(server net.Conn, nrcpt int) { - defer func() { - smtpdone <- struct{}{} - }() - // We do a minimal fake smtp server. We cannot import smtpserver.Serve due to // cyclic dependencies. 
fmt.Fprintf(server, "220 mail.mox.example\r\n") @@ -495,7 +472,7 @@ func TestQueue(t *testing.T) { nfakeSubmitServer(server, 2) } - testQueue := func(expectDSN bool, fakeServer func(conn net.Conn)) bool { + testQueue := func(expectDSN bool, fakeServer func(conn net.Conn), nresults int) (wasNetDialer bool) { t.Helper() var pipes []net.Conn @@ -505,8 +482,11 @@ func TestQueue(t *testing.T) { } }() - var wasNetDialer bool + var connMu sync.Mutex smtpclient.DialHook = func(ctx context.Context, dialer smtpclient.Dialer, timeout time.Duration, addr string, laddr net.Addr) (net.Conn, error) { + connMu.Lock() + defer connMu.Unlock() + // Setting up a pipe. We'll start a fake smtp server on the server-side. And return the // client-side to the invocation dial, for the attempted delivery from the queue. server, client := net.Pipe() @@ -515,12 +495,6 @@ func TestQueue(t *testing.T) { _, wasNetDialer = dialer.(*net.Dialer) - // For reconnects, we are already waiting for delivery below. - select { - case dialed <- struct{}{}: - default: - } - return client, nil } defer func() { @@ -533,54 +507,45 @@ func TestQueue(t *testing.T) { inboxCount, err := bstore.QueryDB[store.Message](ctxbg, acc.DB).FilterNonzero(store.Message{MailboxID: inbox.ID}).Count() tcheck(t, err, "querying messages in inbox") - waitDeliver := func() { - t.Helper() - timer.Reset(time.Second) - select { - case <-dialed: - select { - case <-smtpdone: - i := 0 - for { - xmsgs, err := List(ctxbg, Filter{}) - tcheck(t, err, "list queue") - if len(xmsgs) == 0 { - ninbox, err := bstore.QueryDB[store.Message](ctxbg, acc.DB).FilterNonzero(store.Message{MailboxID: inbox.ID}).Count() - tcheck(t, err, "querying messages in inbox") - if expectDSN && ninbox != inboxCount+1 { - t.Fatalf("got %d messages in inbox, previously %d, expected 1 additional for dsn", ninbox, inboxCount) - } else if !expectDSN && ninbox != inboxCount { - t.Fatalf("got %d messages in inbox, previously %d, expected no additional messages", ninbox, 
inboxCount) - } + launchWork(pkglog, resolver, map[string]struct{}{}) - break - } - i++ - if i == 10 { - t.Fatalf("%d messages in queue, expected 0", len(xmsgs)) - } - time.Sleep(100 * time.Millisecond) - } - case <-timer.C: - t.Fatalf("no deliver within 1s") - } + // Wait for all results. + timer.Reset(time.Second) + for i := 0; i < nresults; i++ { + select { + case <-deliveryResults: case <-timer.C: t.Fatalf("no dial within 1s") } - <-deliveryResults // Deliver sends here. } - launchWork(pkglog, resolver, map[string]struct{}{}) - waitDeliver() + // Check that queue is now empty. + xmsgs, err := List(ctxbg, Filter{}, Sort{}) + tcheck(t, err, "list queue") + tcompare(t, len(xmsgs), 0) + + // And that we possibly got a DSN delivered. + ninbox, err := bstore.QueryDB[store.Message](ctxbg, acc.DB).FilterNonzero(store.Message{MailboxID: inbox.ID}).Count() + tcheck(t, err, "querying messages in inbox") + if expectDSN && ninbox != inboxCount+1 { + t.Fatalf("got %d messages in inbox, previously %d, expected 1 additional for dsn", ninbox, inboxCount) + } else if !expectDSN && ninbox != inboxCount { + t.Fatalf("got %d messages in inbox, previously %d, expected no additional messages", ninbox, inboxCount) + } + return wasNetDialer } testDeliver := func(fakeServer func(conn net.Conn)) bool { t.Helper() - return testQueue(false, fakeServer) + return testQueue(false, fakeServer, 1) + } + testDeliverN := func(fakeServer func(conn net.Conn), nresults int) bool { + t.Helper() + return testQueue(false, fakeServer, nresults) } testDSN := func(fakeServer func(conn net.Conn)) bool { t.Helper() - return testQueue(true, fakeServer) + return testQueue(true, fakeServer, 1) } // Test direct delivery. @@ -591,7 +556,7 @@ func TestQueue(t *testing.T) { // Single delivery to two recipients at same domain, expecting single connection // and single transaction. 
- qm0 := MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm0 := MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") qml := []Msg{qm0, qm0} // Same NextAttempt. err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add messages to queue for delivery") @@ -602,13 +567,13 @@ func TestQueue(t *testing.T) { otherpath := otheraddr.Path() t0 := time.Now() qml = []Msg{ - MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, t0), - MakeMsg(path, otherpath, false, false, int64(len(testmsg)), "", nil, nil, t0), + MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, t0, "test"), + MakeMsg(path, otherpath, false, false, int64(len(testmsg)), "", nil, nil, t0, "test"), } err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add messages to queue for delivery") conns := ConnectionCounter() - testDeliver(fakeSMTPServer) + testDeliverN(fakeSMTPServer, 2) nconns := ConnectionCounter() if nconns != conns+2 { t.Errorf("saw %d connections, expected 2", nconns-conns) @@ -630,7 +595,7 @@ func TestQueue(t *testing.T) { // Add a message to be delivered with submit because of its route. topath := smtp.Path{Localpart: "mjl", IPDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "submit.example"}}} - qm = MakeMsg(path, topath, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm = MakeMsg(path, topath, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") wasNetDialer = testDeliver(fakeSubmitServer) @@ -648,7 +613,7 @@ func TestQueue(t *testing.T) { } // Add a message to be delivered with submit because of explicitly configured transport, that uses TLS. 
- qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") transportSubmitTLS := "submittls" @@ -697,7 +662,7 @@ func TestQueue(t *testing.T) { } // Add a message to be delivered with socks. - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") n, err = TransportSet(ctxbg, idfilter(qml[0].ID), "socks") @@ -713,7 +678,7 @@ func TestQueue(t *testing.T) { // Add message to be delivered with opportunistic TLS verification. clearTLSResults(t) - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -723,7 +688,7 @@ func TestQueue(t *testing.T) { // Test fallback to plain text with TLS handshake fails. clearTLSResults(t) - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) 
tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -739,7 +704,7 @@ func TestQueue(t *testing.T) { {Usage: adns.TLSAUsageDANEEE, Selector: adns.TLSASelectorSPKI, MatchType: adns.TLSAMatchTypeFull, CertAssoc: moxCert.Leaf.RawSubjectPublicKeyInfo}, }, } - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -755,7 +720,7 @@ func TestQueue(t *testing.T) { tcompare(t, rdt.RequireTLS, true) // Add message to be delivered with verified TLS and REQUIRETLS. - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &yes, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &yes, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -768,7 +733,7 @@ func TestQueue(t *testing.T) { {}, }, } - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -785,7 +750,7 @@ func TestQueue(t *testing.T) { {Usage: adns.TLSAUsageDANEEE, Selector: adns.TLSASelectorSPKI, MatchType: adns.TLSAMatchTypeFull, CertAssoc: make([]byte, sha256.Size)}, }, } - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) 
tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -802,21 +767,21 @@ func TestQueue(t *testing.T) { tcompare(t, rdt.RequireTLS, false) // Check that message is delivered with TLS-Required: No and non-matching DANE record. - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &no, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &no, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) testDeliver(fakeSMTPSTARTTLSServer) // Check that message is delivered with TLS-Required: No and bad TLS, falling back to plain text. - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &no, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &no, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) testDeliver(makeBadFakeSMTPSTARTTLSServer(true)) // Add message with requiretls that fails immediately due to no REQUIRETLS support in all servers. - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &yes, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &yes, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) @@ -827,22 +792,19 @@ func TestQueue(t *testing.T) { resolver.TLSA = nil // Add message with requiretls that fails immediately due to no verification policy for recipient domain. - qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &yes, time.Now())} + qml = []Msg{MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, &yes, time.Now(), "test")} err = Add(ctxbg, pkglog, "mjl", mf, qml...) 
tcheck(t, err, "add message to queue for delivery") kick(1, qml[0].ID) // Based on DNS lookups, there won't be any dialing or SMTP connection. - dialed <- struct{}{} - testDSN(func(conn net.Conn) { - smtpdone <- struct{}{} - }) + testDSN(func(conn net.Conn) {}) // Add another message that we'll fail to deliver entirely. - qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") - msgs, err = List(ctxbg, Filter{}) + msgs, err = List(ctxbg, Filter{}, Sort{}) tcheck(t, err, "list queue") if len(msgs) != 1 { t.Fatalf("queue has %d messages, expected 1", len(msgs)) @@ -892,7 +854,6 @@ func TestQueue(t *testing.T) { defer comm.Unregister() for i := 1; i < 8; i++ { - go func() { <-deliveryResults }() // Deliver sends here. if i == 4 { resolver.AllAuthentic = true resolver.TLSA = map[string][]adns.TLSA{ @@ -905,7 +866,8 @@ func TestQueue(t *testing.T) { resolver.AllAuthentic = false resolver.TLSA = nil } - deliver(pkglog, resolver, msg) + go deliver(pkglog, resolver, msg) + <-deliveryResults err = DB.Get(ctxbg, &msg) tcheck(t, err, "get msg") if msg.Attempts != i { @@ -927,8 +889,8 @@ func TestQueue(t *testing.T) { } // Trigger final failure. - go func() { <-deliveryResults }() // Deliver sends here. - deliver(pkglog, resolver, msg) + go deliver(pkglog, resolver, msg) + <-deliveryResults err = DB.Get(ctxbg, &msg) if err != bstore.ErrAbsent { t.Fatalf("attempt to fetch delivered and removed message from queue, got err %v, expected ErrAbsent", err) @@ -945,6 +907,11 @@ func TestQueue(t *testing.T) { case <-timer.C: t.Fatalf("no dsn in 1s") } + + // We shouldn't have any more work to do. 
+ msgs, err = List(ctxbg, Filter{}, Sort{}) + tcheck(t, err, "list messages at end of test") + tcompare(t, len(msgs), 0) } func addCounts(success, failure int64, result tlsrpt.Result) tlsrpt.Result { @@ -978,6 +945,225 @@ func checkTLSResults(t *testing.T, policyDomain, expRecipientDomain string, expI tcompare(t, result.Results, expResults) } +// Test delivered/permfailed/suppressed/canceled/dropped messages are stored in the +// retired list if configured, with a proper result, that webhooks are scheduled, +// and that cleaning up works. +func TestRetiredHooks(t *testing.T) { + _, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + addr, err := smtp.ParseAddress("mjl@mox.example") + tcheck(t, err, "parse address") + path := addr.Path() + + mf := prepareFile(t) + defer os.Remove(mf.Name()) + defer mf.Close() + + resolver := dns.MockResolver{ + A: map[string][]string{"mox.example.": {"127.0.0.1"}}, + MX: map[string][]*net.MX{"mox.example.": {{Host: "mox.example", Pref: 10}}}, + } + + testAction := func(account string, action func(), expResult *MsgResult, expEvent string, expSuppressing bool) { + t.Helper() + + _, err := bstore.QueryDB[MsgRetired](ctxbg, DB).Delete() + tcheck(t, err, "clearing retired messages") + _, err = bstore.QueryDB[Hook](ctxbg, DB).Delete() + tcheck(t, err, "clearing hooks") + + qm := MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") + qm.Extra = map[string]string{"a": "123"} + err = Add(ctxbg, pkglog, account, mf, qm) + tcheck(t, err, "add to queue") + + action() + + // Should be no messages left in queue. 
+ msgs, err := List(ctxbg, Filter{}, Sort{}) + tcheck(t, err, "list messages") + tcompare(t, len(msgs), 0) + + retireds, err := RetiredList(ctxbg, RetiredFilter{}, RetiredSort{}) + tcheck(t, err, "list retired messages") + hooks, err := HookList(ctxbg, HookFilter{}, HookSort{}) + tcheck(t, err, "list hooks") + if expResult == nil { + tcompare(t, len(retireds), 0) + tcompare(t, len(hooks), 0) + } else { + tcompare(t, len(retireds), 1) + mr := retireds[0] + tcompare(t, len(mr.Results) > 0, true) + lr := mr.LastResult() + lr.Start = time.Time{} + lr.Duration = 0 + tcompare(t, lr.Error == "", expResult.Error == "") + lr.Error = expResult.Error + tcompare(t, lr, *expResult) + + // Compare added webhook. + tcompare(t, len(hooks), 1) + h := hooks[0] + var out webhook.Outgoing + dec := json.NewDecoder(strings.NewReader(h.Payload)) + dec.DisallowUnknownFields() + err := dec.Decode(&out) + tcheck(t, err, "unmarshal outgoing webhook payload") + tcompare(t, out.Error == "", expResult.Error == "") + out.WebhookQueued = time.Time{} + out.Error = "" + var ecode string + if expResult.Secode != "" { + ecode = fmt.Sprintf("%d.%s", expResult.Code/100, expResult.Secode) + } + expOut := webhook.Outgoing{ + Event: webhook.OutgoingEvent(expEvent), + Suppressing: expSuppressing, + QueueMsgID: mr.ID, + FromID: mr.FromID, + MessageID: mr.MessageID, + Subject: mr.Subject, + SMTPCode: expResult.Code, + SMTPEnhancedCode: ecode, + Extra: mr.Extra, + } + tcompare(t, out, expOut) + h.ID = 0 + h.Payload = "" + h.Submitted = time.Time{} + h.NextAttempt = time.Time{} + exph := Hook{0, mr.ID, "", mr.MessageID, mr.Subject, mr.Extra, mr.SenderAccount, "http://localhost:1234/outgoing", "Basic dXNlcm5hbWU6cGFzc3dvcmQ=", false, expEvent, "", time.Time{}, 0, time.Time{}, nil} + tcompare(t, h, exph) + } + } + + makeLaunchAction := func(handler func(conn net.Conn)) func() { + return func() { + server, client := net.Pipe() + defer server.Close() + + smtpclient.DialHook = func(ctx context.Context, dialer 
smtpclient.Dialer, timeout time.Duration, addr string, laddr net.Addr) (net.Conn, error) { + go handler(server) + return client, nil + } + defer func() { + smtpclient.DialHook = nil + }() + + // Trigger delivery attempt. + n := launchWork(pkglog, resolver, map[string]struct{}{}) + tcompare(t, n, 1) + + // Wait until delivery has finished. + tm := time.NewTimer(5 * time.Second) + defer tm.Stop() + select { + case <-tm.C: + t.Fatalf("delivery didn't happen within 5s") + case <-deliveryResults: + } + } + } + + smtpAccept := func(conn net.Conn) { + br := bufio.NewReader(conn) + readline := func(cmd string) { + line, err := br.ReadString('\n') + if err == nil && !strings.HasPrefix(strings.ToLower(line), cmd) { + panic(fmt.Sprintf("unexpected line %q, expected %q", line, cmd)) + } + } + writeline := func(s string) { + fmt.Fprintf(conn, "%s\r\n", s) + } + + writeline("220 mail.mox.example") + readline("ehlo") + writeline("250 mail.mox.example") + + readline("mail") + writeline("250 ok") + readline("rcpt") + writeline("250 ok") + readline("data") + writeline("354 continue") + reader := smtp.NewDataReader(br) + io.Copy(io.Discard, reader) + writeline("250 ok") + readline("quit") + writeline("250 ok") + } + smtpReject := func(code int) func(conn net.Conn) { + return func(conn net.Conn) { + br := bufio.NewReader(conn) + readline := func(cmd string) { + line, err := br.ReadString('\n') + if err == nil && !strings.HasPrefix(strings.ToLower(line), cmd) { + panic(fmt.Sprintf("unexpected line %q, expected %q", line, cmd)) + } + } + writeline := func(s string) { + fmt.Fprintf(conn, "%s\r\n", s) + } + + writeline("220 mail.mox.example") + readline("ehlo") + writeline("250-mail.mox.example") + writeline("250 enhancedstatuscodes") + + readline("mail") + writeline(fmt.Sprintf("%d 5.1.0 nok", code)) + readline("quit") + writeline("250 ok") + } + } + + testAction("mjl", makeLaunchAction(smtpAccept), nil, "", false) + testAction("retired", makeLaunchAction(smtpAccept), &MsgResult{}, 
string(webhook.EventDelivered), false) + // 554 is generic, doesn't immediately cause suppression. + testAction("mjl", makeLaunchAction(smtpReject(554)), nil, "", false) + testAction("retired", makeLaunchAction(smtpReject(554)), &MsgResult{Code: 554, Secode: "1.0", Error: "nonempty"}, string(webhook.EventFailed), false) + // 550 causes immediate suppression, check for it in webhook. + testAction("mjl", makeLaunchAction(smtpReject(550)), nil, "", true) + testAction("retired", makeLaunchAction(smtpReject(550)), &MsgResult{Code: 550, Secode: "1.0", Error: "nonempty"}, string(webhook.EventFailed), true) + // Try to deliver to suppressed addresses. + launch := func() { + n := launchWork(pkglog, resolver, map[string]struct{}{}) + tcompare(t, n, 1) + <-deliveryResults + } + testAction("mjl", launch, nil, "", false) + testAction("retired", launch, &MsgResult{Error: "nonempty"}, string(webhook.EventSuppressed), false) + + queueFail := func() { + n, err := Fail(ctxbg, pkglog, Filter{}) + tcheck(t, err, "cancel delivery with failure dsn") + tcompare(t, n, 1) + } + queueDrop := func() { + n, err := Drop(ctxbg, pkglog, Filter{}) + tcheck(t, err, "cancel delivery without failure dsn") + tcompare(t, n, 1) + } + testAction("mjl", queueFail, nil, "", false) + testAction("retired", queueFail, &MsgResult{Error: "nonempty"}, string(webhook.EventFailed), false) + testAction("mjl", queueDrop, nil, "", false) + testAction("retired", queueDrop, &MsgResult{Error: "nonempty"}, string(webhook.EventCanceled), false) + + retireds, err := RetiredList(ctxbg, RetiredFilter{}, RetiredSort{}) + tcheck(t, err, "list retired messages") + tcompare(t, len(retireds), 1) + + cleanupMsgRetiredSingle(pkglog) + retireds, err = RetiredList(ctxbg, RetiredFilter{}, RetiredSort{}) + tcheck(t, err, "list retired messages") + tcompare(t, len(retireds), 0) +} + // test Start and that it attempts to deliver. func TestQueueStart(t *testing.T) { // Override dial function. 
We'll make connecting fail and check the attempt. @@ -996,9 +1182,14 @@ func TestQueueStart(t *testing.T) { _, cleanup := setup(t) defer cleanup() - done := make(chan struct{}, 1) + + done := make(chan struct{}, 4) defer func() { mox.ShutdownCancel() + // Wait for message and hooks deliverers and cleaners. + <-done + <-done + <-done <-done mox.Shutdown, mox.ShutdownCancel = context.WithCancel(ctxbg) }() @@ -1025,19 +1216,24 @@ func TestQueueStart(t *testing.T) { } } - // HoldRule prevents delivery. - hr, err := HoldRuleAdd(ctxbg, pkglog, HoldRule{}) + // HoldRule to mark all messages sent by mjl on hold, including existing + // messages. + hr0, err := HoldRuleAdd(ctxbg, pkglog, HoldRule{Account: "mjl"}) + tcheck(t, err, "add hold rule") + + // An all-zero HoldRule holds all deliveries, and marks all on hold. + hr1, err := HoldRuleAdd(ctxbg, pkglog, HoldRule{}) tcheck(t, err, "add hold rule") hrl, err := HoldRuleList(ctxbg) tcheck(t, err, "listing hold rules") - tcompare(t, hrl, []HoldRule{hr}) + tcompare(t, hrl, []HoldRule{hr0, hr1}) path := smtp.Path{Localpart: "mjl", IPDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "mox.example"}}} mf := prepareFile(t) defer os.Remove(mf.Name()) defer mf.Close() - qm := MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm := MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") checkDialed(false) // No delivery attempt yet. @@ -1052,8 +1248,10 @@ func TestQueueStart(t *testing.T) { tcompare(t, n, 1) checkDialed(true) - // Remove hold rule. - err = HoldRuleRemove(ctxbg, pkglog, hr.ID) + // Remove hold rules. + err = HoldRuleRemove(ctxbg, pkglog, hr1.ID) + tcheck(t, err, "removing hold rule") + err = HoldRuleRemove(ctxbg, pkglog, hr0.ID) tcheck(t, err, "removing hold rule") // Check it is gone. 
hrl, err = HoldRuleList(ctxbg) @@ -1061,7 +1259,7 @@ func TestQueueStart(t *testing.T) { tcompare(t, len(hrl), 0) // Don't change message nextattempt time, but kick queue. Message should not be delivered. - queuekick() + msgqueueKick() checkDialed(false) // Set new next attempt, should see another attempt. @@ -1078,7 +1276,7 @@ func TestQueueStart(t *testing.T) { mf = prepareFile(t) defer os.Remove(mf.Name()) defer mf.Close() - qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now()) + qm = MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") err = Add(ctxbg, pkglog, "mjl", mf, qm) tcheck(t, err, "add message to queue for delivery") checkDialed(true) // Immediate. @@ -1086,6 +1284,256 @@ func TestQueueStart(t *testing.T) { time.Sleep(100 * time.Millisecond) // Racy... give time to finish. } +// Localserve should cause deliveries to go to sender account, with failure (DSN) +// for recipient addresses that start with "queue" and end with +// "temperror"/"permerror"/"timeout". +func TestLocalserve(t *testing.T) { + Localserve = true + defer func() { + Localserve = false + }() + + path := smtp.Path{Localpart: "mjl", IPDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "mox.example"}}} + + testDeliver := func(to smtp.Path, expSuccess bool) { + t.Helper() + + _, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + accret, err := store.OpenAccount(pkglog, "retired") + tcheck(t, err, "open account") + defer func() { + err := accret.Close() + tcheck(t, err, "closing account") + accret.CheckClosed() + }() + + mf := prepareFile(t) + defer os.Remove(mf.Name()) + defer mf.Close() + + // Regular message. + qm := MakeMsg(path, to, false, false, int64(len(testmsg)), "", nil, nil, time.Now(), "test") + qml := []Msg{qm} + err = Add(ctxbg, pkglog, accret.Name, mf, qml...) 
+ tcheck(t, err, "add message to queue") + qm = qml[0] + + deliver(pkglog, nil, qm) + <-deliveryResults + // Message should be delivered to account. + n, err := bstore.QueryDB[store.Message](ctxbg, accret.DB).Count() + tcheck(t, err, "count messages in account") + tcompare(t, n, 1) + + n, err = Count(ctxbg) + tcheck(t, err, "count message queue") + tcompare(t, n, 0) + + _, err = bstore.QueryDB[MsgRetired](ctxbg, DB).Count() + tcheck(t, err, "get retired message") + + hl, err := bstore.QueryDB[Hook](ctxbg, DB).List() + tcheck(t, err, "get webhooks") + if expSuccess { + tcompare(t, len(hl), 2) + tcompare(t, hl[0].IsIncoming, false) + tcompare(t, hl[1].IsIncoming, true) + } else { + tcompare(t, len(hl), 1) + tcompare(t, hl[0].IsIncoming, false) + } + var out webhook.Outgoing + err = json.Unmarshal([]byte(hl[0].Payload), &out) + tcheck(t, err, "unmarshal outgoing webhook payload") + if expSuccess { + tcompare(t, out.Event, webhook.EventDelivered) + } else { + tcompare(t, out.Event, webhook.EventFailed) + } + } + + testDeliver(path, true) + badpath := path + badpath.Localpart = smtp.Localpart("queuepermerror") + testDeliver(badpath, false) +} + +func TestListFilterSort(t *testing.T) { + _, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + // insert Msgs. insert RetiredMsgs based on that. call list with filters and sort. filter to select a single. filter to paginate one by one, and in reverse. + + path := smtp.Path{Localpart: "mjl", IPDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "mox.example"}}} + mf := prepareFile(t) + defer os.Remove(mf.Name()) + defer mf.Close() + + now := time.Now().Round(0) + qm := MakeMsg(path, path, false, false, int64(len(testmsg)), "", nil, nil, now, "test") + qm.Queued = now + qm1 := qm + qm1.Queued = now.Add(-time.Second) + qm1.NextAttempt = now.Add(time.Minute) + qml := []Msg{qm, qm, qm, qm, qm, qm1} + err = Add(ctxbg, pkglog, "mjl", mf, qml...) 
+ tcheck(t, err, "add messages to queue") + qm1 = qml[len(qml)-1] + + qmlrev := slices.Clone(qml) + slices.Reverse(qmlrev) + + // Ascending by nextattempt,id. + l, err := List(ctxbg, Filter{}, Sort{Asc: true}) + tcheck(t, err, "list messages") + tcompare(t, l, qml) + + // Descending by nextattempt,id. + l, err = List(ctxbg, Filter{}, Sort{}) + tcheck(t, err, "list messages") + tcompare(t, l, qmlrev) + + // Descending by queued,id. + l, err = List(ctxbg, Filter{}, Sort{Field: "Queued"}) + tcheck(t, err, "list messages") + ql := append(append([]Msg{}, qmlrev[1:]...), qml[5]) + tcompare(t, l, ql) + + // Filter by all fields to get a single. + no := false + allfilters := Filter{ + Max: 2, + IDs: []int64{qm1.ID}, + Account: "mjl", + From: path.XString(true), + To: path.XString(true), + Hold: &no, + Submitted: "<1s", + NextAttempt: ">1s", + } + l, err = List(ctxbg, allfilters, Sort{}) + tcheck(t, err, "list single") + tcompare(t, l, []Msg{qm1}) + + // Paginated NextAttempt asc. + var lastID int64 + var last any + l = nil + for { + nl, err := List(ctxbg, Filter{Max: 1}, Sort{Asc: true, LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].NextAttempt.Format(time.RFC3339Nano) + } + tcompare(t, l, qml) + + // Paginated NextAttempt desc. + l = nil + lastID = 0 + last = "" + for { + nl, err := List(ctxbg, Filter{Max: 1}, Sort{LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].NextAttempt.Format(time.RFC3339Nano) + } + tcompare(t, l, qmlrev) + + // Paginated Queued desc. + l = nil + lastID = 0 + last = "" + for { + nl, err := List(ctxbg, Filter{Max: 1}, Sort{Field: "Queued", LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) 
+ if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].Queued.Format(time.RFC3339Nano) + } + tcompare(t, l, ql) + + // Paginated Queued asc. + l = nil + lastID = 0 + last = "" + for { + nl, err := List(ctxbg, Filter{Max: 1}, Sort{Field: "Queued", Asc: true, LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + l = append(l, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].Queued.Format(time.RFC3339Nano) + } + qlrev := slices.Clone(ql) + slices.Reverse(qlrev) + tcompare(t, l, qlrev) + + // Retire messages and do similar but more basic tests. The code is similar. + var mrl []MsgRetired + err = DB.Write(ctxbg, func(tx *bstore.Tx) error { + for _, m := range qml { + mr := m.Retired(false, m.NextAttempt, time.Now().Add(time.Minute).Round(0)) + err := tx.Insert(&mr) + tcheck(t, err, "inserting retired message") + mrl = append(mrl, mr) + } + return nil + }) + tcheck(t, err, "adding retired messages") + + // Paginated LastActivity desc. + var lr []MsgRetired + lastID = 0 + last = "" + l = nil + for { + nl, err := RetiredList(ctxbg, RetiredFilter{Max: 1}, RetiredSort{LastID: lastID, Last: last}) + tcheck(t, err, "list paginated") + lr = append(lr, nl...) + if len(nl) == 0 { + break + } + tcompare(t, len(nl), 1) + lastID, last = nl[0].ID, nl[0].LastActivity.Format(time.RFC3339Nano) + } + mrlrev := slices.Clone(mrl) + slices.Reverse(mrlrev) + tcompare(t, lr, mrlrev) + + // Filter by all fields to get a single. + allretiredfilters := RetiredFilter{ + Max: 2, + IDs: []int64{mrlrev[0].ID}, + Account: "mjl", + From: path.XString(true), + To: path.XString(true), + Submitted: "<1s", + LastActivity: ">1s", + } + lr, err = RetiredList(ctxbg, allretiredfilters, RetiredSort{}) + tcheck(t, err, "list single") + tcompare(t, lr, []MsgRetired{mrlrev[0]}) +} + // Just a cert that appears valid. 
func fakeCert(t *testing.T, name string, expired bool) tls.Certificate { notAfter := time.Now() diff --git a/queue/submit.go b/queue/submit.go index 25b1c5d..e264b46 100644 --- a/queue/submit.go +++ b/queue/submit.go @@ -13,6 +13,8 @@ import ( "slices" "time" + "github.com/mjl-/bstore" + "github.com/mjl-/mox/config" "github.com/mjl-/mox/dns" "github.com/mjl-/mox/dsn" @@ -22,6 +24,7 @@ import ( "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/smtpclient" "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webhook" ) // todo: reuse connection? do fewer concurrently (other than with direct delivery). @@ -91,7 +94,7 @@ func deliverSubmit(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale Secode: smtp.SePol7MissingReqTLS30, Err: fmt.Errorf("transport %s: message requires verified tls but transport does not verify tls", transportName), } - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, submiterr) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, submiterr) return } @@ -126,7 +129,7 @@ func deliverSubmit(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale } qlog.Errorx("dialing for submission", err, slog.String("remote", addr)) submiterr = fmt.Errorf("transport %s: dialing %s for submission: %w", transportName, addr, err) - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, submiterr) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, submiterr) return } dialcancel() @@ -183,7 +186,7 @@ func deliverSubmit(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale submiterr = smtperr } qlog.Errorx("establishing smtp session for submission", submiterr, slog.String("remote", addr)) - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, remoteMTA, submiterr) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, remoteMTA, submiterr) return } defer func() { @@ -208,7 +211,7 @@ func deliverSubmit(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale if err != nil { qlog.Errorx("opening message for delivery", err, 
slog.String("remote", addr), slog.String("path", p)) submiterr = fmt.Errorf("transport %s: opening message file for submission: %w", transportName, err) - fail(ctx, qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, submiterr) + failMsgsDB(qlog, msgs, m0.DialedIPs, backoff, dsn.NameIP{}, submiterr) return } msgr = store.FileMsgReader(m0.MsgPrefix, f) @@ -229,7 +232,7 @@ func deliverSubmit(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale qlog.Infox("smtp transaction for delivery failed", submiterr) } failed = 0 // Reset, we are looking at the SMTP results below. - var delIDs []int64 + var delMsgs []Msg for i, m := range msgs { qmlog := qlog.With( slog.Int64("msgid", m.ID), @@ -251,17 +254,24 @@ func deliverSubmit(qlog mlog.Log, resolver dns.Resolver, dialer smtpclient.Diale err = smtperr } qmlog.Errorx("submitting message", err, slog.String("remote", addr)) - fail(ctx, qmlog, []*Msg{m}, m0.DialedIPs, backoff, remoteMTA, err) + failMsgsDB(qmlog, []*Msg{m}, m0.DialedIPs, backoff, remoteMTA, err) failed++ } else { - delIDs = append(delIDs, m.ID) + m.markResult(0, "", "", true) + delMsgs = append(delMsgs, *m) qmlog.Info("delivered from queue with transport") delivered++ } } - if len(delIDs) > 0 { - if err := queueDelete(context.Background(), delIDs...); err != nil { - qlog.Errorx("deleting message from queue after delivery", err) + if len(delMsgs) > 0 { + err := DB.Write(context.Background(), func(tx *bstore.Tx) error { + return retireMsgs(qlog, tx, webhook.EventDelivered, 0, "", nil, delMsgs...) 
+ })
+ if err != nil {
+ qlog.Errorx("remove queue message from database after delivery", err)
+ } else if err := removeMsgsFS(qlog, delMsgs...); err != nil {
+ qlog.Errorx("remove queue message from file system after delivery", err)
 }
+ kick()
 }
 }
diff --git a/queue/suppression.go b/queue/suppression.go
new file mode 100644
index 0000000..c7de138
--- /dev/null
+++ b/queue/suppression.go
@@ -0,0 +1,173 @@
+package queue
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "log/slog"
+ "strings"
+
+ "github.com/mjl-/bstore"
+
+ "github.com/mjl-/mox/mlog"
+ "github.com/mjl-/mox/smtp"
+ "github.com/mjl-/mox/webapi"
+)
+
+// todo: we should be processing spam complaints and add addresses to the list.
+
+var errSuppressed = errors.New("address is on suppression list")
+
+// baseAddress returns the base form of an address as used for suppression
+// lookups: the localpart is lowercased, dots are removed, and anything after a
+// "+" or "-" separator is stripped.
+func baseAddress(a smtp.Path) smtp.Path {
+ s := string(a.Localpart)
+ s, _, _ = strings.Cut(s, "+")
+ s, _, _ = strings.Cut(s, "-")
+ s = strings.ReplaceAll(s, ".", "")
+ s = strings.ToLower(s)
+ return smtp.Path{Localpart: smtp.Localpart(s), IPDomain: a.IPDomain}
+}
+
+// SuppressionList returns all suppressions. If account is not empty, only
+// suppressions for that account are returned.
+//
+// SuppressionList does not check if an account exists.
+func SuppressionList(ctx context.Context, account string) ([]webapi.Suppression, error) {
+ q := bstore.QueryDB[webapi.Suppression](ctx, DB)
+ if account != "" {
+ q.FilterNonzero(webapi.Suppression{Account: account})
+ }
+ return q.List()
+}
+
+// SuppressionLookup looks up a suppression for an address for an account. Returns
+// a nil suppression if not found.
+//
+// SuppressionLookup does not check if an account exists.
+func SuppressionLookup(ctx context.Context, account string, address smtp.Path) (*webapi.Suppression, error) {
+ baseAddr := baseAddress(address).XString(true)
+ q := bstore.QueryDB[webapi.Suppression](ctx, DB)
+ q.FilterNonzero(webapi.Suppression{Account: account, BaseAddress: baseAddr})
+ sup, err := q.Get()
+ if err == bstore.ErrAbsent {
+ return nil, nil
+ }
+ return &sup, err
+}
+
+// SuppressionAdd adds a suppression for an address for an account, setting
+// BaseAddress based on OriginalAddress.
+//
+// If the base address of the original address is already present, an error is
+// returned (such as from bstore).
+//
+// SuppressionAdd does not check if an account exists.
+func SuppressionAdd(ctx context.Context, originalAddress smtp.Path, sup *webapi.Suppression) error {
+ sup.BaseAddress = baseAddress(originalAddress).XString(true)
+ sup.OriginalAddress = originalAddress.XString(true)
+ return DB.Insert(ctx, sup)
+}
+
+// SuppressionRemove removes a suppression. The base address for the given
+// address is removed.
+//
+// SuppressionRemove does not check if an account exists.
+func SuppressionRemove(ctx context.Context, account string, address smtp.Path) error {
+ baseAddr := baseAddress(address).XString(true)
+ q := bstore.QueryDB[webapi.Suppression](ctx, DB)
+ q.FilterNonzero(webapi.Suppression{Account: account, BaseAddress: baseAddr})
+ n, err := q.Delete()
+ if err != nil {
+ return err
+ }
+ if n == 0 {
+ return bstore.ErrAbsent
+ }
+ return nil
+}
+
+type suppressionCheck struct {
+ MsgID int64
+ Account string
+ Recipient smtp.Path
+ Code int
+ Secode string
+ Source string
+}
+
+// process failures, possibly creating suppressions.
+func suppressionProcess(log mlog.Log, tx *bstore.Tx, scl ...suppressionCheck) (suppressedMsgIDs []int64, err error) { + for _, sc := range scl { + xlog := log.With(slog.Any("suppressioncheck", sc)) + baseAddr := baseAddress(sc.Recipient).XString(true) + exists, err := bstore.QueryTx[webapi.Suppression](tx).FilterNonzero(webapi.Suppression{Account: sc.Account, BaseAddress: baseAddr}).Exists() + if err != nil { + return nil, fmt.Errorf("checking if address is in suppression list: %v", err) + } else if exists { + xlog.Debug("address already in suppression list") + continue + } + + origAddr := sc.Recipient.XString(true) + sup := webapi.Suppression{ + Account: sc.Account, + BaseAddress: baseAddr, + OriginalAddress: origAddr, + } + + if isImmedateBlock(sc.Code, sc.Secode) { + sup.Reason = fmt.Sprintf("delivery failure from %s with smtp code %d, enhanced code %q", sc.Source, sc.Code, sc.Secode) + } else { + // If two most recent deliveries failed (excluding this one, so three most recent + // messages including this one), we'll add the address to the list. + q := bstore.QueryTx[MsgRetired](tx) + q.FilterNonzero(MsgRetired{RecipientAddress: origAddr}) + q.FilterNotEqual("ID", sc.MsgID) + q.SortDesc("LastActivity") + q.Limit(2) + l, err := q.List() + if err != nil { + xlog.Errorx("checking for previous delivery failures", err) + continue + } + if len(l) < 2 || l[0].Success || l[1].Success { + continue + } + sup.Reason = fmt.Sprintf("delivery failure from %s and three consecutive failures", sc.Source) + } + if err := tx.Insert(&sup); err != nil { + return nil, fmt.Errorf("inserting suppression: %v", err) + } + suppressedMsgIDs = append(suppressedMsgIDs, sc.MsgID) + } + return suppressedMsgIDs, nil +} + +// Decide whether an SMTP code and short enhanced code is a reason for an +// immediate suppression listing. For some errors, we don't want to bother the +// remote mail server again, or they may decide our behaviour looks spammy. 
+func isImmedateBlock(code int, secode string) bool { + switch code { + case smtp.C521HostNoMail, // Host is not interested in accepting email at all. + smtp.C550MailboxUnavail, // Likely mailbox does not exist. + smtp.C551UserNotLocal, // Also not interested in accepting email for this address. + smtp.C553BadMailbox, // We are sending a mailbox name that server doesn't understand and won't accept email for. + smtp.C556DomainNoMail: // Remote is not going to accept email for this address/domain. + return true + } + if code/100 != 5 { + return false + } + switch secode { + case smtp.SeAddr1UnknownDestMailbox1, // Recipient localpart doesn't exist. + smtp.SeAddr1UnknownSystem2, // Bad recipient domain. + smtp.SeAddr1MailboxSyntax3, // Remote doesn't understand syntax. + smtp.SeAddr1DestMailboxMoved6, // Address no longer exists. + smtp.SeMailbox2Disabled1, // Account exists at remote, but is disabled. + smtp.SePol7DeliveryUnauth1: // Seems popular for saying we are on a blocklist. + return true + } + return false +} diff --git a/queue/suppression_test.go b/queue/suppression_test.go new file mode 100644 index 0000000..2ac52d8 --- /dev/null +++ b/queue/suppression_test.go @@ -0,0 +1,107 @@ +package queue + +import ( + "testing" + + "github.com/mjl-/mox/smtp" + "github.com/mjl-/mox/webapi" +) + +func TestSuppression(t *testing.T) { + _, cleanup := setup(t) + defer cleanup() + err := Init() + tcheck(t, err, "queue init") + + l, err := SuppressionList(ctxbg, "bogus") + tcheck(t, err, "listing suppressions for unknown account") + tcompare(t, len(l), 0) + + l, err = SuppressionList(ctxbg, "") // All + tcheck(t, err, "list suppression for all accounts") + tcompare(t, len(l), 0) // None yet. 
+
+ addr1, err := smtp.ParseAddress("mjl@mox.example")
+ tcheck(t, err, "parse address")
+ path1 := addr1.Path()
+ addr2, err := smtp.ParseAddress("mjl2@mox.example")
+ tcheck(t, err, "parse address")
+ path2 := addr2.Path()
+ addr2b, err := smtp.ParseAddress("M.j.l2+catchall@Mox.example")
+ tcheck(t, err, "parse address")
+ path2b := addr2b.Path()
+
+ // No suppression yet.
+ sup, err := SuppressionLookup(ctxbg, "mjl", path1)
+ tcheck(t, err, "lookup suppression")
+ tcompare(t, sup == nil, true)
+
+ // No error if account does not exist.
+ sup, err = SuppressionLookup(ctxbg, "bogus", path1)
+ tcompare(t, err == nil, true)
+ tcompare(t, sup == nil, true)
+
+ // Can add a suppression once.
+ err = SuppressionAdd(ctxbg, path1, &webapi.Suppression{Account: "mjl"})
+ tcheck(t, err, "add suppression")
+ // No duplicates.
+ err = SuppressionAdd(ctxbg, path1, &webapi.Suppression{Account: "mjl"})
+ tcompare(t, err == nil, false)
+ // Account must be set in Suppression.
+ err = SuppressionAdd(ctxbg, path1, &webapi.Suppression{})
+ tcompare(t, err == nil, false)
+
+ // Duplicate check is done after making base address.
+ err = SuppressionAdd(ctxbg, path2, &webapi.Suppression{Account: "retired"})
+ tcheck(t, err, "add suppression")
+ err = SuppressionAdd(ctxbg, path2b, &webapi.Suppression{Account: "retired"})
+ tcompare(t, err == nil, false) // Duplicate.
+
+ l, err = SuppressionList(ctxbg, "") // All
+ tcheck(t, err, "list suppression for all accounts")
+ tcompare(t, len(l), 2)
+ l, err = SuppressionList(ctxbg, "mjl")
+ tcheck(t, err, "list suppression for mjl")
+ tcompare(t, len(l), 1)
+
+ // path1 is listed for mjl.
+ sup, err = SuppressionLookup(ctxbg, "mjl", path1)
+ tcheck(t, err, "lookup")
+ tcompare(t, sup == nil, false)
+
+ // Accounts don't influence each other.
+ sup, err = SuppressionLookup(ctxbg, "mjl", path2)
+ tcheck(t, err, "lookup")
+ tcompare(t, sup == nil, true)
+
+ // Simplified address is present.
+ sup, err = SuppressionLookup(ctxbg, "retired", path2)
+ tcheck(t, err, "lookup")
+ tcompare(t, sup == nil, false)
+
+ // Original address is also present.
+ sup, err = SuppressionLookup(ctxbg, "retired", path2b)
+ tcheck(t, err, "lookup")
+ tcompare(t, sup == nil, false)
+
+ // Can remove again.
+ err = SuppressionRemove(ctxbg, "mjl", path1)
+ tcheck(t, err, "remove")
+ // But not twice.
+ err = SuppressionRemove(ctxbg, "mjl", path1)
+ tcompare(t, err == nil, false)
+ // No longer present.
+ sup, err = SuppressionLookup(ctxbg, "mjl", path1)
+ tcheck(t, err, "lookup")
+ tcompare(t, sup == nil, true)
+
+ // Can remove for any form of the address, was added as path2b.
+ err = SuppressionRemove(ctxbg, "retired", path2b)
+ tcheck(t, err, "remove")
+
+ // Account names are not validated.
+ err = SuppressionAdd(ctxbg, path1, &webapi.Suppression{Account: "bogus"})
+ tcheck(t, err, "add suppression")
+ err = SuppressionRemove(ctxbg, "bogus", path1)
+ tcheck(t, err, "remove suppression")
+}
diff --git a/quickstart.go b/quickstart.go
index 3748eac..1bf4e69 100644
--- a/quickstart.go
+++ b/quickstart.go
@@ -90,8 +90,8 @@ domains with HTTP/HTTPS, including with automatic TLS with ACME, is easily
 configured through both configuration files and admin web interface, and can act
 as a reverse proxy (and static file server for that matter), so you can forward
 traffic to your existing backend applications. Look for "WebHandlers:" in the
-output of "mox config describe-domains" and see the output of "mox example
-webhandlers".
+output of "mox config describe-domains" and see the output of
+"mox config example webhandlers".
 `
 var existingWebserver bool
 var hostname string
@@ -563,7 +563,8 @@ WARNING: Could not verify outgoing smtp connections can be made, outgoing
 delivery may not be working. Many providers block outgoing smtp connections by
 default, requiring an explicit request or a cooldown period before allowing
 outgoing smtp connections.
To send through a smarthost, configure a "Transport" -in mox.conf and use it in "Routes" in domains.conf. See "mox example transport". +in mox.conf and use it in "Routes" in domains.conf. See +"mox config example transport". `) } @@ -774,6 +775,7 @@ and check the admin page for the needed DNS records.`) internal.AccountHTTP.Enabled = true internal.AdminHTTP.Enabled = true internal.WebmailHTTP.Enabled = true + internal.WebAPIHTTP.Enabled = true internal.MetricsHTTP.Enabled = true if existingWebserver { internal.AccountHTTP.Port = 1080 @@ -782,6 +784,8 @@ and check the admin page for the needed DNS records.`) internal.AdminHTTP.Forwarded = true internal.WebmailHTTP.Port = 1080 internal.WebmailHTTP.Forwarded = true + internal.WebAPIHTTP.Port = 1080 + internal.WebAPIHTTP.Forwarded = true internal.AutoconfigHTTPS.Enabled = true internal.AutoconfigHTTPS.Port = 81 internal.AutoconfigHTTPS.NonTLS = true diff --git a/serve.go b/serve.go index 1457a5e..2c3a33b 100644 --- a/serve.go +++ b/serve.go @@ -78,7 +78,7 @@ func start(mtastsdbRefresher, sendDMARCReports, sendTLSReports, skipForkExec boo return fmt.Errorf("tlsrpt init: %s", err) } - done := make(chan struct{}, 1) + done := make(chan struct{}, 4) // Goroutines for messages and webhooks, and cleaners. 
if err := queue.Start(dns.StrictResolver{Pkg: "queue"}, done); err != nil { return fmt.Errorf("queue start: %s", err) } diff --git a/smtpserver/dsn.go b/smtpserver/dsn.go index 7307971..4c2dcce 100644 --- a/smtpserver/dsn.go +++ b/smtpserver/dsn.go @@ -54,7 +54,7 @@ func queueDSN(ctx context.Context, log mlog.Log, c *conn, rcptTo smtp.Path, m ds if requireTLS { reqTLS = &requireTLS } - qm := queue.MakeMsg(smtp.Path{}, rcptTo, has8bit, smtputf8, int64(len(buf)), m.MessageID, nil, reqTLS, time.Now()) + qm := queue.MakeMsg(smtp.Path{}, rcptTo, has8bit, smtputf8, int64(len(buf)), m.MessageID, nil, reqTLS, time.Now(), m.Subject) qm.DSNUTF8 = bufUTF8 if err := queue.Add(ctx, c.log, "", f, qm); err != nil { return err diff --git a/smtpserver/server.go b/smtpserver/server.go index 3c9cd4c..f9ddcbb 100644 --- a/smtpserver/server.go +++ b/smtpserver/server.go @@ -6,6 +6,7 @@ import ( "bytes" "context" "crypto/md5" + cryptorand "crypto/rand" "crypto/rsa" "crypto/sha1" "crypto/sha256" @@ -21,8 +22,8 @@ import ( "net/textproto" "os" "runtime/debug" + "slices" "sort" - "strconv" "strings" "sync" "time" @@ -150,7 +151,7 @@ var ( "reason", }, ) - // Similar between ../webmail/webmail.go:/metricSubmission and ../smtpserver/server.go:/metricSubmission + // Similar between ../webmail/webmail.go:/metricSubmission and ../smtpserver/server.go:/metricSubmission and ../webapisrv/server.go:/metricSubmission metricSubmission = promauto.NewCounterVec( prometheus.CounterOpts{ Name: "mox_smtpserver_submission_total", @@ -1944,7 +1945,7 @@ func hasTLSRequiredNo(h textproto.MIMEHeader) bool { // submit is used for mail from authenticated users that we will try to deliver. 
func (c *conn) submit(ctx context.Context, recvHdrFor func(string) string, msgWriter *message.Writer, dataFile *os.File, part *message.Part) { - // Similar between ../smtpserver/server.go:/submit\( and ../webmail/webmail.go:/MessageSubmit\( + // Similar between ../smtpserver/server.go:/submit\( and ../webmail/api.go:/MessageSubmit\( and ../webapisrv/server.go:/Send\( var msgPrefix []byte @@ -2017,6 +2018,26 @@ func (c *conn) submit(ctx context.Context, recvHdrFor func(string) string, msgWr }) xcheckf(err, "read-only transaction") + // We gather any X-Mox-Extra-* headers into the "extra" data during queueing, which + // will make it into any webhook we deliver. + // todo: remove the X-Mox-Extra-* headers from the message. we don't currently rewrite the message... + // todo: should we not canonicalize keys? + var extra map[string]string + for k, vl := range header { + if !strings.HasPrefix(k, "X-Mox-Extra-") { + continue + } + if extra == nil { + extra = map[string]string{} + } + xk := k[len("X-Mox-Extra-"):] + // We don't allow duplicate keys. + if _, ok := extra[xk]; ok || len(vl) > 1 { + xsmtpUserErrorf(smtp.C554TransactionFailed, smtp.SeMsg6Other0, "duplicate x-mox-extra- key %q", xk) + } + extra[xk] = vl[len(vl)-1] + } + // todo future: in a pedantic mode, we can parse the headers, and return an error if rcpt is only in To or Cc header, and not in the non-empty Bcc header. indicates a client that doesn't blind those bcc's. // Add DKIM signatures. @@ -2054,13 +2075,23 @@ func (c *conn) submit(ctx context.Context, recvHdrFor func(string) string, msgWr msgPrefix = append(msgPrefix, []byte(authResults.Header())...) // We always deliver through the queue. It would be more efficient to deliver - // directly, but we don't want to circumvent all the anti-spam measures. Accounts - // on a single mox instance should be allowed to block each other. + // directly for local accounts, but we don't want to circumvent all the anti-spam + // measures. 
Accounts on a single mox instance should be allowed to block each + // other. + + accConf, _ := c.account.Conf() + loginAddr, err := smtp.ParseAddress(c.username) + xcheckf(err, "parsing login address") + useFromID := slices.Contains(accConf.ParsedFromIDLoginAddresses, loginAddr) + var localpartBase string + if useFromID { + localpartBase = strings.SplitN(string(c.mailFrom.Localpart), confDom.LocalpartCatchallSeparator, 2)[0] + } now := time.Now() qml := make([]queue.Msg, len(c.recipients)) for i, rcptAcc := range c.recipients { if Localserve { - code, timeout := localserveNeedsError(rcptAcc.rcptTo.Localpart) + code, timeout := mox.LocalserveNeedsError(rcptAcc.rcptTo.Localpart) if timeout { c.log.Info("timing out submission due to special localpart") mox.Sleep(mox.Context, time.Hour) @@ -2071,6 +2102,13 @@ func (c *conn) submit(ctx context.Context, recvHdrFor func(string) string, msgWr } } + fp := *c.mailFrom + var fromID string + if useFromID { + fromID = xrandomID(16) + fp.Localpart = smtp.Localpart(localpartBase + confDom.LocalpartCatchallSeparator + fromID) + } + // For multiple recipients, we don't make each message prefix unique, leaving out // the "for" clause in the Received header. This allows the queue to deliver the // messages in a single smtp transaction. @@ -2080,11 +2118,13 @@ func (c *conn) submit(ctx context.Context, recvHdrFor func(string) string, msgWr } xmsgPrefix := append([]byte(recvHdrFor(rcptTo)), msgPrefix...) 
msgSize := int64(len(xmsgPrefix)) + msgWriter.Size - qm := queue.MakeMsg(*c.mailFrom, rcptAcc.rcptTo, msgWriter.Has8bit, c.msgsmtputf8, msgSize, messageID, xmsgPrefix, c.requireTLS, now) + qm := queue.MakeMsg(fp, rcptAcc.rcptTo, msgWriter.Has8bit, c.msgsmtputf8, msgSize, messageID, xmsgPrefix, c.requireTLS, now, header.Get("Subject")) if !c.futureRelease.IsZero() { qm.NextAttempt = c.futureRelease qm.FutureReleaseRequest = c.futureReleaseRequest } + qm.FromID = fromID + qm.Extra = extra qml[i] = qm } @@ -2124,6 +2164,20 @@ func (c *conn) submit(ctx context.Context, recvHdrFor func(string) string, msgWr c.writecodeline(smtp.C250Completed, smtp.SeMailbox2Other0, "it is done", nil) } +func xrandomID(n int) string { + return base64.RawURLEncoding.EncodeToString(xrandom(n)) +} + +func xrandom(n int) []byte { + buf := make([]byte, n) + x, err := cryptorand.Read(buf) + xcheckf(err, "read random") + if x != n { + xcheckf(errors.New("short random read"), "read random") + } + return buf +} + func ipmasked(ip net.IP) (string, string, string) { if ip.To4() != nil { m1 := ip.String() @@ -2137,31 +2191,8 @@ func ipmasked(ip net.IP) (string, string, string) { return m1, m2, m3 } -func localserveNeedsError(lp smtp.Localpart) (code int, timeout bool) { - s := string(lp) - if strings.HasSuffix(s, "temperror") { - return smtp.C451LocalErr, false - } else if strings.HasSuffix(s, "permerror") { - return smtp.C550MailboxUnavail, false - } else if strings.HasSuffix(s, "timeout") { - return 0, true - } - if len(s) < 3 { - return 0, false - } - s = s[len(s)-3:] - v, err := strconv.ParseInt(s, 10, 32) - if err != nil { - return 0, false - } - if v < 400 || v > 600 { - return 0, false - } - return int(v), false -} - func (c *conn) xlocalserveError(lp smtp.Localpart) { - code, timeout := localserveNeedsError(lp) + code, timeout := mox.LocalserveNeedsError(lp) if timeout { c.log.Info("timing out due to special localpart") mox.Sleep(mox.Context, time.Hour) @@ -2178,7 +2209,16 @@ func (c *conn) 
xlocalserveError(lp smtp.Localpart) {
 func (c *conn) deliver(ctx context.Context, recvHdrFor func(string) string, msgWriter *message.Writer, iprevStatus iprev.Status, iprevAuthentic bool, dataFile *os.File) {
 // todo: in decision making process, if we run into (some) temporary errors, attempt to continue. if we decide to accept, all good. if we decide to reject, we'll make it a temporary reject.
- msgFrom, envelope, headers, err := message.From(c.log.Logger, false, dataFile, nil)
+ var msgFrom smtp.Address
+ var envelope *message.Envelope
+ var headers textproto.MIMEHeader
+ var isDSN bool
+ part, err := message.Parse(c.log.Logger, false, dataFile)
+ if err == nil {
+ // todo: is it enough to check only the content-type header? in other places we look at the content-types of the parts before considering a message a dsn. should we change other places to this simpler check?
+ isDSN = part.MediaType == "MULTIPART" && part.MediaSubType == "REPORT" && strings.EqualFold(part.ContentTypeParams["report-type"], "delivery-status")
+ msgFrom, envelope, headers, err = message.From(c.log.Logger, false, dataFile, &part)
+ }
 if err != nil {
 c.log.Infox("parsing message for From address", err)
 }
@@ -2676,6 +2716,7 @@ func (c *conn) deliver(ctx context.Context, recvHdrFor func(string) string, msgW
 MailFromValidation: mailFromValidation,
 MsgFromValidation: msgFromValidation,
 DKIMDomains: verifiedDKIMDomains,
+ DSN: isDSN,
 Size: msgWriter.Size,
 }
 if c.tls {
@@ -2960,8 +3001,8 @@ func (c *conn) deliver(ctx context.Context, recvHdrFor func(string) string, msgW
 }
 }
- if Localserve {
- code, timeout := localserveNeedsError(rcptAcc.rcptTo.Localpart)
+ if Localserve && !strings.HasPrefix(string(rcptAcc.rcptTo.Localpart), "queue") {
+ code, timeout := mox.LocalserveNeedsError(rcptAcc.rcptTo.Localpart)
 if timeout {
 log.Info("timing out due to special localpart")
 mox.Sleep(mox.Context, time.Hour)
@@ -2972,6 +3013,7 @@ func (c *conn) deliver(ctx context.Context, recvHdrFor func(string)
string, msgW addError(rcptAcc, code, smtp.SeOther00, false, fmt.Sprintf("failure with code %d due to special localpart", code)) } } + var delivered bool acc.WithWLock(func() { if err := acc.DeliverMailbox(log, a.mailbox, &m, dataFile); err != nil { log.Errorx("delivering", err) @@ -2983,6 +3025,7 @@ func (c *conn) deliver(ctx context.Context, recvHdrFor func(string) string, msgW } return } + delivered = true metricDelivery.WithLabelValues("delivered", a.reason).Inc() log.Info("incoming message delivered", slog.String("reason", a.reason), slog.Any("msgfrom", msgFrom)) @@ -2994,6 +3037,18 @@ func (c *conn) deliver(ctx context.Context, recvHdrFor func(string) string, msgW } }) + // Pass delivered messages to queue for DSN processing and/or hooks. + if delivered { + mr := store.FileMsgReader(m.MsgPrefix, dataFile) + part, err := m.LoadPart(mr) + if err != nil { + log.Errorx("loading parsed part for evaluating webhook", err) + } else { + err = queue.Incoming(context.Background(), log, acc, messageID, m, part, a.mailbox) + log.Check(err, "queueing webhook for incoming delivery") + } + } + err = acc.Close() log.Check(err, "closing account after delivering") acc = nil diff --git a/smtpserver/server_test.go b/smtpserver/server_test.go index 9b201e6..ae3b33d 100644 --- a/smtpserver/server_test.go +++ b/smtpserver/server_test.go @@ -143,6 +143,7 @@ func (ts *testserver) close() { } func (ts *testserver) run(fn func(helloErr error, client *smtpclient.Client)) { + ts.t.Helper() ts.runRaw(func(conn net.Conn) { ts.t.Helper() @@ -1443,7 +1444,7 @@ test email } tcheck(t, err, "deliver") - msgs, err := queue.List(ctxbg, queue.Filter{}) + msgs, err := queue.List(ctxbg, queue.Filter{}, queue.Sort{}) tcheck(t, err, "listing queue") n++ tcompare(t, len(msgs), n) @@ -1592,7 +1593,7 @@ test email } tcheck(t, err, "deliver") - msgs, err := queue.List(ctxbg, queue.Filter{}) + msgs, err := queue.List(ctxbg, queue.Filter{}, queue.Sort{}) tcheck(t, err, "listing queue") tcompare(t, len(msgs), 
1)
 tcompare(t, msgs[0].RequireTLS, expRequireTLS)
@@ -1808,8 +1809,8 @@ QW4gYXR0YWNoZWQgdGV4dCBmaWxlLg==
 return
 }
- msgs, _ := queue.List(ctxbg, queue.Filter{})
- queuedMsg := msgs[len(msgs)-1]
+ msgs, _ := queue.List(ctxbg, queue.Filter{}, queue.Sort{Field: "Queued", Asc: false})
+ queuedMsg := msgs[0]
 if queuedMsg.SMTPUTF8 != expectedSmtputf8 {
 t.Fatalf("[%s / %s / %s / %s] got SMTPUTF8 %t, expected %t", mailFrom, rcptTo, headerValue, filename, queuedMsg.SMTPUTF8, expectedSmtputf8)
 }
@@ -1828,3 +1829,79 @@ QW4gYXR0YWNoZWQgdGV4dCBmaWxlLg==
 test(`Ω@mox.example`, `🙂@example.org`, "header-utf8-😍", "utf8-🫠️.txt", true, true, nil)
 test(`mjl@mox.example`, `remote@xn--vg8h.example.org`, "header-ascii", "ascii.txt", true, false, nil)
 }
+
+// TestExtra checks whether submission of messages with "X-Mox-Extra-*: value"
+// headers causes those key/value pairs to be added to the Extra field in the
+// queue.
+func TestExtra(t *testing.T) {
+ ts := newTestServer(t, filepath.FromSlash("../testdata/smtp/mox.conf"), dns.MockResolver{})
+ defer ts.close()
+
+ ts.user = "mjl@mox.example"
+ ts.pass = password0
+ ts.submission = true
+
+ extraMsg := strings.ReplaceAll(`From:
+To:
+Subject: test
+X-Mox-Extra-Test: testvalue
+X-Mox-Extra-a: 123
+X-Mox-Extra-☺: ☹
+X-Mox-Extra-x-cANONICAL-z: ok
+Message-Id:
+
+test email
+`, "\n", "\r\n")
+
+ ts.run(func(err error, client *smtpclient.Client) {
+ t.Helper()
+ tcheck(t, err, "init client")
+ mailFrom := "mjl@mox.example"
+ rcptTo := "mjl@mox.example"
+ err = client.Deliver(ctxbg, mailFrom, rcptTo, int64(len(extraMsg)), strings.NewReader(extraMsg), true, true, false)
+ tcheck(t, err, "deliver")
+ })
+ msgs, err := queue.List(ctxbg, queue.Filter{}, queue.Sort{})
+ tcheck(t, err, "queue list")
+ tcompare(t, len(msgs), 1)
+ tcompare(t, msgs[0].Extra, map[string]string{
+ "Test": "testvalue",
+ "A": "123",
+ "☺": "☹",
+ "X-Canonical-Z": "ok",
+ })
+ // note: these headers currently stay in the message.
+} + +// TestExtraDup checks for an error for duplicate x-mox-extra-* keys. +func TestExtraDup(t *testing.T) { + ts := newTestServer(t, filepath.FromSlash("../testdata/smtp/mox.conf"), dns.MockResolver{}) + defer ts.close() + + ts.user = "mjl@mox.example" + ts.pass = password0 + ts.submission = true + + extraMsg := strings.ReplaceAll(`From: +To: +Subject: test +X-Mox-Extra-Test: testvalue +X-Mox-Extra-Test: testvalue +Message-Id: + +test email +`, "\n", "\r\n") + + ts.run(func(err error, client *smtpclient.Client) { + t.Helper() + tcheck(t, err, "init client") + mailFrom := "mjl@mox.example" + rcptTo := "mjl@mox.example" + err = client.Deliver(ctxbg, mailFrom, rcptTo, int64(len(extraMsg)), strings.NewReader(extraMsg), true, true, false) + var cerr smtpclient.Error + expErr := smtpclient.Error{Code: smtp.C554TransactionFailed, Secode: smtp.SeMsg6Other0} + if err == nil || !errors.As(err, &cerr) || cerr.Code != expErr.Code || cerr.Secode != expErr.Secode { + t.Fatalf("got err %#v, expected %#v", err, expErr) + } + }) +} diff --git a/store/account.go b/store/account.go index f52c5a3..3ce57f0 100644 --- a/store/account.go +++ b/store/account.go @@ -485,8 +485,8 @@ type Message struct { // filtering). IsMailingList bool - // If this message is a DSN. For DSNs, we don't look at the subject when matching - // threads. + // If this message is a DSN, generated by us or received. For DSNs, we don't look + // at the subject when matching threads. DSN bool ReceivedTLSVersion uint16 // 0 if unknown, 1 if plaintext/no TLS, otherwise TLS cipher suite. @@ -1265,7 +1265,8 @@ func (a *Account) HighestDeletedModSeq(tx *bstore.Tx) (ModSeq, error) { return v.HighestDeletedModSeq, err } -// WithWLock runs fn with account writelock held. Necessary for account/mailbox modification. For message delivery, a read lock is required. +// WithWLock runs fn with account writelock held. Necessary for account/mailbox +// modification. For message delivery, a read lock is required. 
func (a *Account) WithWLock(fn func()) { a.Lock() defer a.Unlock() @@ -2224,6 +2225,32 @@ func (f Flags) Changed(other Flags) (mask Flags) { return } +// Strings returns the flags that are set in their string form. +func (f Flags) Strings() []string { + fields := []struct { + word string + have bool + }{ + {`$forwarded`, f.Forwarded}, + {`$junk`, f.Junk}, + {`$mdnsent`, f.MDNSent}, + {`$notjunk`, f.Notjunk}, + {`$phishing`, f.Phishing}, + {`\answered`, f.Answered}, + {`\deleted`, f.Deleted}, + {`\draft`, f.Draft}, + {`\flagged`, f.Flagged}, + {`\seen`, f.Seen}, + } + var l []string + for _, fh := range fields { + if fh.have { + l = append(l, fh.word) + } + } + return l +} + var systemWellKnownFlags = map[string]bool{ `\answered`: true, `\flagged`: true, diff --git a/testdata/ctl/domains.conf b/testdata/ctl/domains.conf index 449f11e..d684d0b 100644 --- a/testdata/ctl/domains.conf +++ b/testdata/ctl/domains.conf @@ -2,6 +2,10 @@ Domains: mox.example: nil Accounts: mjl: + OutgoingWebhook: + URL: http://localhost:1234 + KeepRetiredMessagePeriod: 1h0m0s + KeepRetiredWebhookPeriod: 1h0m0s Domain: mox.example Destinations: mjl@mox.example: nil diff --git a/testdata/httpaccount/domains.conf b/testdata/httpaccount/domains.conf index 8931695..67191f1 100644 --- a/testdata/httpaccount/domains.conf +++ b/testdata/httpaccount/domains.conf @@ -1,5 +1,6 @@ Domains: - mox.example: nil + mox.example: + LocalpartCatchallSeparator: + Accounts: mjl☺: Domain: mox.example diff --git a/testdata/queue/domains.conf b/testdata/queue/domains.conf index 25e5a56..c9fc635 100644 --- a/testdata/queue/domains.conf +++ b/testdata/queue/domains.conf @@ -1,10 +1,33 @@ Domains: - mox.example: nil + mox.example: + LocalpartCatchallSeparator: + Accounts: mjl: Domain: mox.example Destinations: mjl@mox.example: nil + retired: + Domain: mox.example + Destinations: + retired@mox.example: nil + KeepRetiredMessagePeriod: 1ns + KeepRetiredWebhookPeriod: 1ns + OutgoingWebhook: + URL: 
http://localhost:1234/outgoing + Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= + IncomingWebhook: + URL: http://localhost:1234/incoming + Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= + hook: + Domain: mox.example + Destinations: + hook@mox.example: nil + OutgoingWebhook: + URL: http://localhost:1234/outgoing + Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= + IncomingWebhook: + URL: http://localhost:1234/incoming + Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= Routes: - diff --git a/testdata/webapisrv/domains.conf b/testdata/webapisrv/domains.conf new file mode 100644 index 0000000..221b6a7 --- /dev/null +++ b/testdata/webapisrv/domains.conf @@ -0,0 +1,32 @@ +Domains: + mox.example: + LocalpartCatchallSeparator: + + DKIM: + Selectors: + testsel: + PrivateKeyFile: testsel.rsakey.pkcs8.pem + Sign: + - testsel +Accounts: + other: + Domain: mox.example + Destinations: + other@mox.example: nil + mjl: + MaxOutgoingMessagesPerDay: 30 + MaxFirstTimeRecipientsPerDay: 10 + Domain: mox.example + FromIDLoginAddresses: + - mjl+fromid@mox.example + Destinations: + mjl@mox.example: nil + møx@mox.example: nil + móx@mox.example: nil + RejectsMailbox: Rejects + JunkFilter: + Threshold: 0.95 + Params: + Twograms: true + MaxPower: 0.1 + TopWords: 10 + IgnoreWords: 0.1 diff --git a/testdata/webapisrv/mox.conf b/testdata/webapisrv/mox.conf new file mode 100644 index 0000000..1370e33 --- /dev/null +++ b/testdata/webapisrv/mox.conf @@ -0,0 +1,11 @@ +DataDir: data +User: 1000 +LogLevel: trace +Hostname: mox.example +Listeners: + local: + IPs: + - 0.0.0.0 +Postmaster: + Account: mjl + Mailbox: postmaster diff --git a/testdata/webapisrv/testsel.rsakey.pkcs8.pem b/testdata/webapisrv/testsel.rsakey.pkcs8.pem new file mode 100644 index 0000000..73d742c --- /dev/null +++ b/testdata/webapisrv/testsel.rsakey.pkcs8.pem @@ -0,0 +1,30 @@ +-----BEGIN PRIVATE KEY----- +Note: RSA private key for use with DKIM, generated by mox + +MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDdkh3fKzvRUWym 
+n9UwVrEw6s2Mc0+DTg04TWJKGKHXpvcTHuEcE6ALVS9MZKasyVsIHU7FNeS9/qNb +pLihhGdlhU3KAfrMpTBhiFpJoYiDXED98Of4iBxNHIuheLMxSBSClMbLGE2vAgha +/6LuONuzdMqk/c1TijBD+vGjCZI2qD58cgXWWKRK9e+WNhKNoVdedZ9iJtbtN0MI +UWk3iwHmjXf5qzS7i8vDoy86Ln0HW0vKl7UtwemLVv09/E23OdNN163eQvSlrEhx +a0odPQsM9SizxhiaI9rmcZtSqULt37hhPaNA+/AbELCzWijZPDqePVRqKGd5gYDK +8STLj0UHAgMBAAECggEBAKVkJJgplYUx2oCmXmSu0aVKIBTvHjNNV+DnIq9co7Ju +F5BWRILIw3ayJ5RGrYPc6e6ssdfT2uNX6GjIFGm8g9HsJ5zazXNk+zBSr9K2mUg0 +3O6xnPaP41BMNo5ZoqjuvSCcHagMhDBWvBXxLJXWK2lRjNKMAXCSfmTANQ8WXeYd +XG2nYTPtBu6UgY8W6sKAx1xetxBrzk8q6JTxb5eVG22BSiUniWYif+XVmAj1u6TH +0m6X0Kb6zsMYYgKPC2hmDsxD3uZ7qBNxxJzzLjpK6eP9aeFKzNyfnaoO4s+9K6Di +31oxTBpqLI4dcrvg4xWl+YkEknXXaomMqM8hyDzfcAECgYEA9/zmjRpoTAoY3fu9 +mn16wxReFXZZZhqV0+c+gyYtao2Kf2pUNAdhD62HQv7KtAPPHKvLfL8PH0u7bzK0 +vVNzBUukwxGI7gsoTMdc3L5x4v9Yb6jUx7RrDZn93sDod/1f/sb56ARCFQoqbUck +dSjnVUyF/l5oeh6CgKhvtghJ/AcCgYEA5Lq4kL82qWjIuNUT/C3lzjPfQVU+WvQ9 +wa+x4B4mxm5r4na3AU1T8H+peh4YstAJUgscGfYnLzxuMGuP1ReIuWYy29eDptKl +WTzVZDcZrAPciP1FOL6jm03PT2UAEuoPRr4OHLg8DxoOqG8pxqk1izDSHG2Tof6l +0ToafeIALwECgYEA8wvLTgnOpI/U1WNP7aUDd0Rz/WbzsW1m4Lsn+lOleWPllIE6 +q4974mi5Q8ECG7IL/9aj5cw/XvXTauVwXIn4Ff2QKpr58AvBYJaX/cUtS0PlgfIf +MOczcK43MWUxscADoGmVLn9V4NcIw/dQ1P7U0zXfsXEHxoA2eTAb5HV1RWsCgYBd +TcXoVfgIV1Q6AcGrR1XNLd/OmOVc2PEwR2l6ERKkM3sS4HZ6s36gRpNt20Ub/D0x +GJMYDA+j9zTDz7zWokkFyCjLATkVHiyRIH2z6b4xK0oVH6vTIAFBYxZEPuEu1gfx +RaogEQ9+4ZRFJUOXZIMRCpNLQW/Nz0D4/oi7/SsyAQKBgHEA27Js8ivt+EFCBjwB +UbkW+LonDAXuUbw91lh5jICCigqUg73HNmV5xpoYI9JNPc6fy6wLyInVUC2w9tpO +eH2Rl8n79vQMLbzsFClGEC/Q1kAbK5bwUjlfvKBZjvE0RknWX9e1ZY04DSsunSrM +prS2eHVZ24hecd7j9XfAbHLC +-----END PRIVATE KEY----- diff --git a/tlsrptsend/send.go b/tlsrptsend/send.go index 131dc82..08bc2e7 100644 --- a/tlsrptsend/send.go +++ b/tlsrptsend/send.go @@ -589,7 +589,7 @@ Period: %s - %s UTC continue } - qm := queue.MakeMsg(from.Path(), rcpt.Address.Path(), has8bit, smtputf8, msgSize, messageID, []byte(msgPrefix), nil, time.Now()) + qm := queue.MakeMsg(from.Path(), 
rcpt.Address.Path(), has8bit, smtputf8, msgSize, messageID, []byte(msgPrefix), nil, time.Now(), subject) // Don't try as long as regular deliveries, and stop before we would send the // delayed DSN. Though we also won't send that due to IsTLSReport. // ../rfc/8460:1077 @@ -662,7 +662,7 @@ func composeMessage(ctx context.Context, log mlog.Log, mf *os.File, policyDomain xc.Line() // Textual part, just mentioning this is a TLS report. - textBody, ct, cte := xc.TextPart(text) + textBody, ct, cte := xc.TextPart("plain", text) textHdr := textproto.MIMEHeader{} textHdr.Set("Content-Type", ct) textHdr.Set("Content-Transfer-Encoding", cte) diff --git a/webaccount/account.go b/webaccount/account.go index 6ea8781..d32086c 100644 --- a/webaccount/account.go +++ b/webaccount/account.go @@ -5,6 +5,7 @@ package webaccount import ( "archive/tar" "archive/zip" + "bytes" "compress/gzip" "context" cryptorand "crypto/rand" @@ -15,9 +16,11 @@ import ( "io" "log/slog" "net/http" + "net/url" "os" "path/filepath" "strings" + "time" _ "embed" @@ -30,8 +33,12 @@ import ( "github.com/mjl-/mox/mlog" "github.com/mjl-/mox/mox-" "github.com/mjl-/mox/moxvar" + "github.com/mjl-/mox/queue" + "github.com/mjl-/mox/smtp" "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webapi" "github.com/mjl-/mox/webauth" + "github.com/mjl-/mox/webhook" ) var pkglog = mlog.New("webaccount", nil) @@ -414,7 +421,7 @@ func (Account) SetPassword(ctx context.Context, password string) { // Account returns information about the account. // StorageUsed is the sum of the sizes of all messages, in bytes. // StorageLimit is the maximum storage that can be used, or 0 if there is no limit. 
-func (Account) Account(ctx context.Context) (account config.Account, storageUsed, storageLimit int64) { +func (Account) Account(ctx context.Context) (account config.Account, storageUsed, storageLimit int64, suppressions []webapi.Suppression) { log := pkglog.WithContext(ctx) reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) @@ -439,16 +446,19 @@ func (Account) Account(ctx context.Context) (account config.Account, storageUsed xcheckf(ctx, err, "get disk usage") }) - return accConf, storageUsed, storageLimit + suppressions, err = queue.SuppressionList(ctx, reqInfo.AccountName) + xcheckf(ctx, err, "list suppressions") + + return accConf, storageUsed, storageLimit, suppressions } +// AccountSaveFullName saves the full name (used as display name in email messages) +// for the account. func (Account) AccountSaveFullName(ctx context.Context, fullName string) { reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) - _, ok := mox.Conf.Account(reqInfo.AccountName) - if !ok { - xcheckf(ctx, errors.New("not found"), "looking up account") - } - err := mox.AccountFullNameSave(ctx, reqInfo.AccountName, fullName) + err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) { + acc.FullName = fullName + }) xcheckf(ctx, err, "saving account full name") } @@ -457,25 +467,29 @@ func (Account) AccountSaveFullName(ctx context.Context, fullName string) { // error is returned. Otherwise newDest is saved and the configuration reloaded. 
func (Account) DestinationSave(ctx context.Context, destName string, oldDest, newDest config.Destination) { reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) - accConf, ok := mox.Conf.Account(reqInfo.AccountName) - if !ok { - xcheckf(ctx, errors.New("not found"), "looking up account") - } - curDest, ok := accConf.Destinations[destName] - if !ok { - xcheckuserf(ctx, errors.New("not found"), "looking up destination") - } - if !curDest.Equal(oldDest) { - xcheckuserf(ctx, errors.New("modified"), "checking stored destination") - } + err := mox.AccountSave(ctx, reqInfo.AccountName, func(conf *config.Account) { + curDest, ok := conf.Destinations[destName] + if !ok { + xcheckuserf(ctx, errors.New("not found"), "looking up destination") + } + if !curDest.Equal(oldDest) { + xcheckuserf(ctx, errors.New("modified"), "checking stored destination") + } - // Keep fields we manage. - newDest.DMARCReports = curDest.DMARCReports - newDest.HostTLSReports = curDest.HostTLSReports - newDest.DomainTLSReports = curDest.DomainTLSReports + // Keep fields we manage. + newDest.DMARCReports = curDest.DMARCReports + newDest.HostTLSReports = curDest.HostTLSReports + newDest.DomainTLSReports = curDest.DomainTLSReports - err := mox.DestinationSave(ctx, reqInfo.AccountName, destName, newDest) + // Make copy of reference values. + nd := map[string]config.Destination{} + for dn, d := range conf.Destinations { + nd[dn] = d + } + nd[destName] = newDest + conf.Destinations = nd + }) xcheckf(ctx, err, "saving destination") } @@ -491,3 +505,159 @@ func (Account) ImportAbort(ctx context.Context, importToken string) error { func (Account) Types() (importProgress ImportProgress) { return } + +// SuppressionList lists the addresses on the suppression list of this account. 
+func (Account) SuppressionList(ctx context.Context) (suppressions []webapi.Suppression) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + l, err := queue.SuppressionList(ctx, reqInfo.AccountName) + xcheckf(ctx, err, "list suppressions") + return l +} + +// SuppressionAdd adds an email address to the suppression list. +func (Account) SuppressionAdd(ctx context.Context, address string, manual bool, reason string) (suppression webapi.Suppression) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + addr, err := smtp.ParseAddress(address) + xcheckuserf(ctx, err, "parsing address") + sup := webapi.Suppression{ + Account: reqInfo.AccountName, + Manual: manual, + Reason: reason, + } + err = queue.SuppressionAdd(ctx, addr.Path(), &sup) + if err != nil && errors.Is(err, bstore.ErrUnique) { + xcheckuserf(ctx, err, "add suppression") + } + xcheckf(ctx, err, "add suppression") + return sup +} + +// SuppressionRemove removes the email address from the suppression list. +func (Account) SuppressionRemove(ctx context.Context, address string) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + addr, err := smtp.ParseAddress(address) + xcheckuserf(ctx, err, "parsing address") + err = queue.SuppressionRemove(ctx, reqInfo.AccountName, addr.Path()) + if err != nil && err == bstore.ErrAbsent { + xcheckuserf(ctx, err, "remove suppression") + } + xcheckf(ctx, err, "remove suppression") +} + +// OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url +// is empty, the webhook is disabled. If authorization is non-empty it is used for +// the Authorization header in HTTP requests. Events specifies the outgoing events +// to be delivered, or all if empty/nil. 
+func (Account) OutgoingWebhookSave(ctx context.Context, url, authorization string, events []string) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) { + if url == "" { + acc.OutgoingWebhook = nil + } else { + acc.OutgoingWebhook = &config.OutgoingWebhook{URL: url, Authorization: authorization, Events: events} + } + }) + if err != nil && errors.Is(err, mox.ErrConfig) { + xcheckuserf(ctx, err, "saving account outgoing webhook") + } + xcheckf(ctx, err, "saving account outgoing webhook") +} + +// OutgoingWebhookTest makes a test webhook call to urlStr, with optional +// authorization. If the HTTP request is made, this call will succeed even for +// non-2xx HTTP status codes. +func (Account) OutgoingWebhookTest(ctx context.Context, urlStr, authorization string, data webhook.Outgoing) (code int, response string, errmsg string) { + log := pkglog.WithContext(ctx) + + xvalidURL(ctx, urlStr) + log.Debug("making webhook test call for outgoing message", slog.String("url", urlStr)) + + var b bytes.Buffer + enc := json.NewEncoder(&b) + enc.SetIndent("", "\t") + enc.SetEscapeHTML(false) + err := enc.Encode(data) + xcheckf(ctx, err, "encoding outgoing webhook data") + + code, response, err = queue.HookPost(ctx, log, 1, 1, urlStr, authorization, b.String()) + if err != nil { + errmsg = err.Error() + } + log.Debugx("result for webhook test call for outgoing message", err, slog.Int("code", code), slog.String("response", response)) + return code, response, errmsg +} + +// IncomingWebhookSave saves a new webhook url for incoming deliveries. If url is +// empty, the webhook is disabled. If authorization is not empty, it is used in +// the Authorization header in requests.
+func (Account) IncomingWebhookSave(ctx context.Context, url, authorization string) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) { + if url == "" { + acc.IncomingWebhook = nil + } else { + acc.IncomingWebhook = &config.IncomingWebhook{URL: url, Authorization: authorization} + } + }) + if err != nil && errors.Is(err, mox.ErrConfig) { + xcheckuserf(ctx, err, "saving account incoming webhook") + } + xcheckf(ctx, err, "saving account incoming webhook") +} + +func xvalidURL(ctx context.Context, s string) { + u, err := url.Parse(s) + xcheckuserf(ctx, err, "parsing url") + if u.Scheme != "http" && u.Scheme != "https" { + xcheckuserf(ctx, errors.New("scheme must be http or https"), "parsing url") + } +} + +// IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr, +// with optional authorization header. If the HTTP call is made, this function +// returns non-error regardless of HTTP status code. +func (Account) IncomingWebhookTest(ctx context.Context, urlStr, authorization string, data webhook.Incoming) (code int, response string, errmsg string) { + log := pkglog.WithContext(ctx) + + xvalidURL(ctx, urlStr) + log.Debug("making webhook test call for incoming message", slog.String("url", urlStr)) + + var b bytes.Buffer + enc := json.NewEncoder(&b) + enc.SetEscapeHTML(false) + enc.SetIndent("", "\t") + err := enc.Encode(data) + xcheckf(ctx, err, "encoding incoming webhook data") + code, response, err = queue.HookPost(ctx, log, 1, 1, urlStr, authorization, b.String()) + if err != nil { + errmsg = err.Error() + } + log.Debugx("result for webhook test call for incoming message", err, slog.Int("code", code), slog.String("response", response)) + return code, response, errmsg +} + +// FromIDLoginAddressesSave saves new login addresses to enable unique SMTP +// MAIL FROM addresses ("fromid") for deliveries from the queue. 
+func (Account) FromIDLoginAddressesSave(ctx context.Context, loginAddresses []string) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) { + acc.FromIDLoginAddresses = loginAddresses + }) + if err != nil && errors.Is(err, mox.ErrConfig) { + xcheckuserf(ctx, err, "saving account fromid login addresses") + } + xcheckf(ctx, err, "saving account fromid login addresses") +} + +// KeepRetiredPeriodsSave saves the periods to keep retired messages and webhooks. +func (Account) KeepRetiredPeriodsSave(ctx context.Context, keepRetiredMessagePeriod, keepRetiredWebhookPeriod time.Duration) { + reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo) + err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) { + acc.KeepRetiredMessagePeriod = keepRetiredMessagePeriod + acc.KeepRetiredWebhookPeriod = keepRetiredWebhookPeriod + }) + if err != nil && errors.Is(err, mox.ErrConfig) { + xcheckuserf(ctx, err, "saving account keep retired periods") + } + xcheckf(ctx, err, "saving account keep retired periods") +} diff --git a/webaccount/account.html b/webaccount/account.html index b8a612e..6152090 100644 --- a/webaccount/account.html +++ b/webaccount/account.html @@ -14,6 +14,7 @@ h2 { font-size: 1.1rem; } h3, h4 { font-size: 1rem; } ul { padding-left: 1rem; } .literal { background-color: #eee; padding: .5em 1em; border: 1px solid #eee; border-radius: 4px; white-space: pre-wrap; font-family: monospace; font-size: 15px; tab-size: 4; } +table { border-spacing: 0; } table td, table th { padding: .2em .5em; } table > tbody > tr:nth-child(odd) { background-color: #f8f8f8; } table.slim td, table.slim th { padding: 0; } @@ -23,8 +24,8 @@ p { margin-bottom: 1em; max-width: 50em; } fieldset { border: 0; } .scriptswitch { text-decoration: underline #dca053 2px; } thead { position: sticky; top: 0; background-color: white; box-shadow: 0 1px 1px rgba(0, 0, 0, 0.1); } -#page { opacity: 1; animation: fadein
0.15s ease-in; } -#page.loading { opacity: 0.1; animation: fadeout 1s ease-out; } +#page, .loadend { opacity: 1; animation: fadein 0.15s ease-in; } +#page.loading, .loadstart { opacity: 0.1; animation: fadeout 1s ease-out; } @keyframes fadein { 0% { opacity: 0 } 100% { opacity: 1 } } @keyframes fadeout { 0% { opacity: 1 } 100% { opacity: 0.1 } } .autosize { display: inline-grid; max-width: 90vw; } diff --git a/webaccount/account.js b/webaccount/account.js index 8aa5a12..41753a3 100644 --- a/webaccount/account.js +++ b/webaccount/account.js @@ -220,6 +220,7 @@ const [dom, style, attr, prop] = (function () { autocomplete: (s) => _attr('autocomplete', s), list: (s) => _attr('list', s), form: (s) => _attr('form', s), + size: (s) => _attr('size', s), }; const style = (x) => { return { _styles: x }; }; const prop = (x) => { return { _props: x }; }; @@ -228,11 +229,39 @@ const [dom, style, attr, prop] = (function () { // NOTE: GENERATED by github.com/mjl-/sherpats, DO NOT MODIFY var api; (function (api) { - api.structTypes = { "Account": true, "AutomaticJunkFlags": true, "Destination": true, "Domain": true, "ImportProgress": true, "JunkFilter": true, "Route": true, "Ruleset": true, "SubjectPass": true }; - api.stringsTypes = { "CSRFToken": true }; + // OutgoingEvent is an activity for an outgoing delivery. Either generated by the + // queue, or through an incoming DSN (delivery status notification) message. + let OutgoingEvent; + (function (OutgoingEvent) { + // Message was accepted by a next-hop server. This does not necessarily mean the + // message has been delivered in the mailbox of the user. + OutgoingEvent["EventDelivered"] = "delivered"; + // Outbound delivery was suppressed because the recipient address is on the + // suppression list of the account, or a simplified/base variant of the address is. + OutgoingEvent["EventSuppressed"] = "suppressed"; + OutgoingEvent["EventDelayed"] = "delayed"; + // Delivery of the message failed and will not be tried again. 
Also see the + // "Suppressing" field of [Outgoing]. + OutgoingEvent["EventFailed"] = "failed"; + // Message was relayed into a system that does not generate DSNs. Should only + // happen when explicitly requested. + OutgoingEvent["EventRelayed"] = "relayed"; + // Message was accepted and is being delivered to multiple recipients (e.g. the + // address was an alias/list), which may generate more DSNs. + OutgoingEvent["EventExpanded"] = "expanded"; + OutgoingEvent["EventCanceled"] = "canceled"; + // An incoming message was received that was either a DSN with an unknown event + // type ("action"), or an incoming non-DSN-message was received for the unique + // per-outgoing-message address used for sending. + OutgoingEvent["EventUnrecognized"] = "unrecognized"; + })(OutgoingEvent = api.OutgoingEvent || (api.OutgoingEvent = {})); + api.structTypes = { "Account": true, "AutomaticJunkFlags": true, "Destination": true, "Domain": true, "ImportProgress": true, "Incoming": true, "IncomingMeta": true, "IncomingWebhook": true, "JunkFilter": true, "NameAddress": true, "Outgoing": true, "OutgoingWebhook": true, "Route": true, "Ruleset": true, "Structure": true, "SubjectPass": true, "Suppression": true }; + api.stringsTypes = { "CSRFToken": true, "OutgoingEvent": true }; api.intsTypes = {}; api.types = { - "Account": { "Name": "Account", "Docs": "", "Fields": [{ "Name": "Domain", "Docs": "", "Typewords": ["string"] }, { "Name": "Description", "Docs": "", "Typewords": ["string"] }, { "Name": "FullName", "Docs": "", "Typewords": ["string"] }, { "Name": "Destinations", "Docs": "", "Typewords": ["{}", "Destination"] }, { "Name": "SubjectPass", "Docs": "", "Typewords": ["SubjectPass"] }, { "Name": "QuotaMessageSize", "Docs": "", "Typewords": ["int64"] }, { "Name": "RejectsMailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "KeepRejects", "Docs": "", "Typewords": ["bool"] }, { "Name": "AutomaticJunkFlags", "Docs": "", "Typewords": ["AutomaticJunkFlags"] }, { "Name": 
"JunkFilter", "Docs": "", "Typewords": ["nullable", "JunkFilter"] }, { "Name": "MaxOutgoingMessagesPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "MaxFirstTimeRecipientsPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "NoFirstTimeSenderDelay", "Docs": "", "Typewords": ["bool"] }, { "Name": "Routes", "Docs": "", "Typewords": ["[]", "Route"] }, { "Name": "DNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, + "Account": { "Name": "Account", "Docs": "", "Fields": [{ "Name": "OutgoingWebhook", "Docs": "", "Typewords": ["nullable", "OutgoingWebhook"] }, { "Name": "IncomingWebhook", "Docs": "", "Typewords": ["nullable", "IncomingWebhook"] }, { "Name": "FromIDLoginAddresses", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "KeepRetiredMessagePeriod", "Docs": "", "Typewords": ["int64"] }, { "Name": "KeepRetiredWebhookPeriod", "Docs": "", "Typewords": ["int64"] }, { "Name": "Domain", "Docs": "", "Typewords": ["string"] }, { "Name": "Description", "Docs": "", "Typewords": ["string"] }, { "Name": "FullName", "Docs": "", "Typewords": ["string"] }, { "Name": "Destinations", "Docs": "", "Typewords": ["{}", "Destination"] }, { "Name": "SubjectPass", "Docs": "", "Typewords": ["SubjectPass"] }, { "Name": "QuotaMessageSize", "Docs": "", "Typewords": ["int64"] }, { "Name": "RejectsMailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "KeepRejects", "Docs": "", "Typewords": ["bool"] }, { "Name": "AutomaticJunkFlags", "Docs": "", "Typewords": ["AutomaticJunkFlags"] }, { "Name": "JunkFilter", "Docs": "", "Typewords": ["nullable", "JunkFilter"] }, { "Name": "MaxOutgoingMessagesPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "MaxFirstTimeRecipientsPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "NoFirstTimeSenderDelay", "Docs": "", "Typewords": ["bool"] }, { "Name": "Routes", "Docs": "", "Typewords": ["[]", "Route"] }, { "Name": "DNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, + "OutgoingWebhook": { "Name": "OutgoingWebhook", "Docs": 
"", "Fields": [{ "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Authorization", "Docs": "", "Typewords": ["string"] }, { "Name": "Events", "Docs": "", "Typewords": ["[]", "string"] }] }, + "IncomingWebhook": { "Name": "IncomingWebhook", "Docs": "", "Fields": [{ "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Authorization", "Docs": "", "Typewords": ["string"] }] }, "Destination": { "Name": "Destination", "Docs": "", "Fields": [{ "Name": "Mailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "Rulesets", "Docs": "", "Typewords": ["[]", "Ruleset"] }, { "Name": "FullName", "Docs": "", "Typewords": ["string"] }] }, "Ruleset": { "Name": "Ruleset", "Docs": "", "Fields": [{ "Name": "SMTPMailFromRegexp", "Docs": "", "Typewords": ["string"] }, { "Name": "VerifiedDomain", "Docs": "", "Typewords": ["string"] }, { "Name": "HeadersRegexp", "Docs": "", "Typewords": ["{}", "string"] }, { "Name": "IsForward", "Docs": "", "Typewords": ["bool"] }, { "Name": "ListAllowDomain", "Docs": "", "Typewords": ["string"] }, { "Name": "AcceptRejectsToMailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "Mailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "VerifiedDNSDomain", "Docs": "", "Typewords": ["Domain"] }, { "Name": "ListAllowDNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, "Domain": { "Name": "Domain", "Docs": "", "Fields": [{ "Name": "ASCII", "Docs": "", "Typewords": ["string"] }, { "Name": "Unicode", "Docs": "", "Typewords": ["string"] }] }, @@ -240,11 +269,20 @@ var api; "AutomaticJunkFlags": { "Name": "AutomaticJunkFlags", "Docs": "", "Fields": [{ "Name": "Enabled", "Docs": "", "Typewords": ["bool"] }, { "Name": "JunkMailboxRegexp", "Docs": "", "Typewords": ["string"] }, { "Name": "NeutralMailboxRegexp", "Docs": "", "Typewords": ["string"] }, { "Name": "NotJunkMailboxRegexp", "Docs": "", "Typewords": ["string"] }] }, "JunkFilter": { "Name": "JunkFilter", "Docs": "", "Fields": [{ "Name": "Threshold", "Docs": "", "Typewords": 
["float64"] }, { "Name": "Onegrams", "Docs": "", "Typewords": ["bool"] }, { "Name": "Twograms", "Docs": "", "Typewords": ["bool"] }, { "Name": "Threegrams", "Docs": "", "Typewords": ["bool"] }, { "Name": "MaxPower", "Docs": "", "Typewords": ["float64"] }, { "Name": "TopWords", "Docs": "", "Typewords": ["int32"] }, { "Name": "IgnoreWords", "Docs": "", "Typewords": ["float64"] }, { "Name": "RareWords", "Docs": "", "Typewords": ["int32"] }] }, "Route": { "Name": "Route", "Docs": "", "Fields": [{ "Name": "FromDomain", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "ToDomain", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "MinimumAttempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "Transport", "Docs": "", "Typewords": ["string"] }, { "Name": "FromDomainASCII", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "ToDomainASCII", "Docs": "", "Typewords": ["[]", "string"] }] }, + "Suppression": { "Name": "Suppression", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Created", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "BaseAddress", "Docs": "", "Typewords": ["string"] }, { "Name": "OriginalAddress", "Docs": "", "Typewords": ["string"] }, { "Name": "Manual", "Docs": "", "Typewords": ["bool"] }, { "Name": "Reason", "Docs": "", "Typewords": ["string"] }] }, "ImportProgress": { "Name": "ImportProgress", "Docs": "", "Fields": [{ "Name": "Token", "Docs": "", "Typewords": ["string"] }] }, + "Outgoing": { "Name": "Outgoing", "Docs": "", "Fields": [{ "Name": "Version", "Docs": "", "Typewords": ["int32"] }, { "Name": "Event", "Docs": "", "Typewords": ["OutgoingEvent"] }, { "Name": "DSN", "Docs": "", "Typewords": ["bool"] }, { "Name": "Suppressing", "Docs": "", "Typewords": ["bool"] }, { "Name": "QueueMsgID", "Docs": "", "Typewords": ["int64"] }, { "Name": "FromID", "Docs": "", "Typewords": ["string"] }, { "Name": "MessageID", "Docs": "", 
"Typewords": ["string"] }, { "Name": "Subject", "Docs": "", "Typewords": ["string"] }, { "Name": "WebhookQueued", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "SMTPCode", "Docs": "", "Typewords": ["int32"] }, { "Name": "SMTPEnhancedCode", "Docs": "", "Typewords": ["string"] }, { "Name": "Error", "Docs": "", "Typewords": ["string"] }, { "Name": "Extra", "Docs": "", "Typewords": ["{}", "string"] }] }, + "Incoming": { "Name": "Incoming", "Docs": "", "Fields": [{ "Name": "Version", "Docs": "", "Typewords": ["int32"] }, { "Name": "From", "Docs": "", "Typewords": ["[]", "NameAddress"] }, { "Name": "To", "Docs": "", "Typewords": ["[]", "NameAddress"] }, { "Name": "CC", "Docs": "", "Typewords": ["[]", "NameAddress"] }, { "Name": "BCC", "Docs": "", "Typewords": ["[]", "NameAddress"] }, { "Name": "ReplyTo", "Docs": "", "Typewords": ["[]", "NameAddress"] }, { "Name": "Subject", "Docs": "", "Typewords": ["string"] }, { "Name": "MessageID", "Docs": "", "Typewords": ["string"] }, { "Name": "InReplyTo", "Docs": "", "Typewords": ["string"] }, { "Name": "References", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "Date", "Docs": "", "Typewords": ["nullable", "timestamp"] }, { "Name": "Text", "Docs": "", "Typewords": ["string"] }, { "Name": "HTML", "Docs": "", "Typewords": ["string"] }, { "Name": "Structure", "Docs": "", "Typewords": ["Structure"] }, { "Name": "Meta", "Docs": "", "Typewords": ["IncomingMeta"] }] }, + "NameAddress": { "Name": "NameAddress", "Docs": "", "Fields": [{ "Name": "Name", "Docs": "", "Typewords": ["string"] }, { "Name": "Address", "Docs": "", "Typewords": ["string"] }] }, + "Structure": { "Name": "Structure", "Docs": "", "Fields": [{ "Name": "ContentType", "Docs": "", "Typewords": ["string"] }, { "Name": "ContentTypeParams", "Docs": "", "Typewords": ["{}", "string"] }, { "Name": "ContentID", "Docs": "", "Typewords": ["string"] }, { "Name": "DecodedSize", "Docs": "", "Typewords": ["int64"] }, { "Name": "Parts", "Docs": "", "Typewords": ["[]", 
"Structure"] }] }, + "IncomingMeta": { "Name": "IncomingMeta", "Docs": "", "Fields": [{ "Name": "MsgID", "Docs": "", "Typewords": ["int64"] }, { "Name": "MailFrom", "Docs": "", "Typewords": ["string"] }, { "Name": "MailFromValidated", "Docs": "", "Typewords": ["bool"] }, { "Name": "MsgFromValidated", "Docs": "", "Typewords": ["bool"] }, { "Name": "RcptTo", "Docs": "", "Typewords": ["string"] }, { "Name": "DKIMVerifiedDomains", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "RemoteIP", "Docs": "", "Typewords": ["string"] }, { "Name": "Received", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "MailboxName", "Docs": "", "Typewords": ["string"] }, { "Name": "Automated", "Docs": "", "Typewords": ["bool"] }] }, "CSRFToken": { "Name": "CSRFToken", "Docs": "", "Values": null }, + "OutgoingEvent": { "Name": "OutgoingEvent", "Docs": "", "Values": [{ "Name": "EventDelivered", "Value": "delivered", "Docs": "" }, { "Name": "EventSuppressed", "Value": "suppressed", "Docs": "" }, { "Name": "EventDelayed", "Value": "delayed", "Docs": "" }, { "Name": "EventFailed", "Value": "failed", "Docs": "" }, { "Name": "EventRelayed", "Value": "relayed", "Docs": "" }, { "Name": "EventExpanded", "Value": "expanded", "Docs": "" }, { "Name": "EventCanceled", "Value": "canceled", "Docs": "" }, { "Name": "EventUnrecognized", "Value": "unrecognized", "Docs": "" }] }, }; api.parser = { Account: (v) => api.parse("Account", v), + OutgoingWebhook: (v) => api.parse("OutgoingWebhook", v), + IncomingWebhook: (v) => api.parse("IncomingWebhook", v), Destination: (v) => api.parse("Destination", v), Ruleset: (v) => api.parse("Ruleset", v), Domain: (v) => api.parse("Domain", v), @@ -252,8 +290,15 @@ var api; AutomaticJunkFlags: (v) => api.parse("AutomaticJunkFlags", v), JunkFilter: (v) => api.parse("JunkFilter", v), Route: (v) => api.parse("Route", v), + Suppression: (v) => api.parse("Suppression", v), ImportProgress: (v) => api.parse("ImportProgress", v), + Outgoing: (v) => api.parse("Outgoing", 
v), + Incoming: (v) => api.parse("Incoming", v), + NameAddress: (v) => api.parse("NameAddress", v), + Structure: (v) => api.parse("Structure", v), + IncomingMeta: (v) => api.parse("IncomingMeta", v), CSRFToken: (v) => api.parse("CSRFToken", v), + OutgoingEvent: (v) => api.parse("OutgoingEvent", v), }; // Account exports web API functions for the account web interface. All its // methods are exported under api/. Function calls require valid HTTP @@ -322,10 +367,12 @@ var api; async Account() { const fn = "Account"; const paramTypes = []; - const returnTypes = [["Account"], ["int64"], ["int64"]]; + const returnTypes = [["Account"], ["int64"], ["int64"], ["[]", "Suppression"]]; const params = []; return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); } + // AccountSaveFullName saves the full name (used as display name in email messages) + // for the account. async AccountSaveFullName(fullName) { const fn = "AccountSaveFullName"; const paramTypes = [["string"]]; @@ -360,6 +407,88 @@ var api; const params = []; return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); } + // SuppressionList lists the addresses on the suppression list of this account. + async SuppressionList() { + const fn = "SuppressionList"; + const paramTypes = []; + const returnTypes = [["[]", "Suppression"]]; + const params = []; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // SuppressionAdd adds an email address to the suppression list. 
+ async SuppressionAdd(address, manual, reason) { + const fn = "SuppressionAdd"; + const paramTypes = [["string"], ["bool"], ["string"]]; + const returnTypes = [["Suppression"]]; + const params = [address, manual, reason]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // SuppressionRemove removes the email address from the suppression list. + async SuppressionRemove(address) { + const fn = "SuppressionRemove"; + const paramTypes = [["string"]]; + const returnTypes = []; + const params = [address]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url + // is empty, the webhook is disabled. If authorization is non-empty it is used for + // the Authorization header in HTTP requests. Events specifies the outgoing events + // to be delivered, or all if empty/nil. + async OutgoingWebhookSave(url, authorization, events) { + const fn = "OutgoingWebhookSave"; + const paramTypes = [["string"], ["string"], ["[]", "string"]]; + const returnTypes = []; + const params = [url, authorization, events]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // OutgoingWebhookTest makes a test webhook call to urlStr, with optional + // authorization. If the HTTP request is made, this call will succeed even for + // non-2xx HTTP status codes. + async OutgoingWebhookTest(urlStr, authorization, data) { + const fn = "OutgoingWebhookTest"; + const paramTypes = [["string"], ["string"], ["Outgoing"]]; + const returnTypes = [["int32"], ["string"], ["string"]]; + const params = [urlStr, authorization, data]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // IncomingWebhookSave saves a new webhook url for incoming deliveries.
If url is + empty, the webhook is disabled. If authorization is not empty, it is used in + the Authorization header in requests. + async IncomingWebhookSave(url, authorization) { + const fn = "IncomingWebhookSave"; + const paramTypes = [["string"], ["string"]]; + const returnTypes = []; + const params = [url, authorization]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr, + // with optional authorization header. If the HTTP call is made, this function + // returns non-error regardless of HTTP status code. + async IncomingWebhookTest(urlStr, authorization, data) { + const fn = "IncomingWebhookTest"; + const paramTypes = [["string"], ["string"], ["Incoming"]]; + const returnTypes = [["int32"], ["string"], ["string"]]; + const params = [urlStr, authorization, data]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // FromIDLoginAddressesSave saves new login addresses to enable unique SMTP + // MAIL FROM addresses ("fromid") for deliveries from the queue. + async FromIDLoginAddressesSave(loginAddresses) { + const fn = "FromIDLoginAddressesSave"; + const paramTypes = [["[]", "string"]]; + const returnTypes = []; + const params = [loginAddresses]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // KeepRetiredPeriodsSave saves the periods to keep retired messages and webhooks.
+ async KeepRetiredPeriodsSave(keepRetiredMessagePeriod, keepRetiredWebhookPeriod) { + const fn = "KeepRetiredPeriodsSave"; + const paramTypes = [["int64"], ["int64"]]; + const returnTypes = []; + const params = [keepRetiredMessagePeriod, keepRetiredWebhookPeriod]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } } api.Client = Client; api.defaultBaseURL = (function () { @@ -753,6 +882,37 @@ const login = async (reason) => { username.focus(); }); }; +// Popup shows kids in a centered div with white background on top of a +// transparent overlay on top of the window. Clicking the overlay or hitting +// Escape closes the popup. Scrollbars are automatically added to the div with +// kids. Returns a function that removes the popup. +const popup = (...kids) => { + const origFocus = document.activeElement; + const close = () => { + if (!root.parentNode) { + return; + } + root.remove(); + if (origFocus && origFocus instanceof HTMLElement && origFocus.parentNode) { + origFocus.focus(); + } + }; + let content; + const root = dom.div(style({ position: 'fixed', top: 0, right: 0, bottom: 0, left: 0, backgroundColor: 'rgba(0, 0, 0, 0.1)', display: 'flex', alignItems: 'center', justifyContent: 'center', zIndex: '1' }), function keydown(e) { + if (e.key === 'Escape') { + e.stopPropagation(); + close(); + } + }, function click(e) { + e.stopPropagation(); + close(); + }, content = dom.div(attr.tabindex('0'), style({ backgroundColor: 'white', borderRadius: '.25em', padding: '1em', boxShadow: '0 0 20px rgba(0, 0, 0, 0.1)', border: '1px solid #ddd', maxWidth: '95vw', overflowX: 'auto', maxHeight: '95vh', overflowY: 'auto' }), function click(e) { + e.stopPropagation(); + }, kids)); + document.body.appendChild(root); + content.focus(); + return close; +}; const localStorageGet = (k) => { try { return window.localStorage.getItem(k); @@ -842,6 +1002,39 @@ const green = '#1dea20'; const yellow = '#ffe400'; const red = 
'#ff7443'; const blue = '#8bc8ff'; +const age = (date) => { + const r = dom.span(dom._class('notooltip'), attr.title(date.toString())); + const nowSecs = new Date().getTime() / 1000; + let t = nowSecs - date.getTime() / 1000; + let negative = ''; + if (t < 0) { + negative = '-'; + t = -t; + } + const minute = 60; + const hour = 60 * minute; + const day = 24 * hour; + const month = 30 * day; + const year = 365 * day; + const periods = [year, month, day, hour, minute]; + const suffix = ['y', 'mo', 'd', 'h', 'min']; + let s; + for (let i = 0; i < periods.length; i++) { + const p = periods[i]; + if (t >= 2 * p || i === periods.length - 1) { + const n = Math.round(t / p); + s = '' + n + suffix[i]; + break; + } + } + if (t < 60) { + s = '<1min'; + // Prevent showing '-<1min' when browser and server have relatively small time drift of max 1 minute. + negative = ''; + } + dom._kids(r, negative + s); + return r; +}; const formatQuotaSize = (v) => { if (v === 0) { return '0'; @@ -861,7 +1054,7 @@ const formatQuotaSize = (v) => { return '' + v; }; const index = async () => { - const [acc, storageUsed, storageLimit] = await client.Account(); + const [acc, storageUsed, storageLimit, suppressions] = await client.Account(); let fullNameForm; let fullNameFieldset; let fullName; @@ -870,12 +1063,78 @@ const index = async () => { let password1; let password2; let passwordHint; + let outgoingWebhookFieldset; + let outgoingWebhookURL; + let outgoingWebhookAuthorization; + let outgoingWebhookEvents; + let incomingWebhookFieldset; + let incomingWebhookURL; + let incomingWebhookAuthorization; + let keepRetiredPeriodsFieldset; + let keepRetiredMessagePeriod; + let keepRetiredWebhookPeriod; + let fromIDLoginAddressesFieldset; + const second = 1000 * 1000 * 1000; + const minute = 60 * second; + const hour = 60 * minute; + const day = 24 * hour; + const week = 7 * day; + const parseDuration = (s) => { + if (!s) { + return 0; + } + const xparseint = () => { + const v = parseInt(s.substring(0, 
s.length - 1)); + if (isNaN(v) || Math.round(v) !== v) { + throw new Error('bad number in duration'); + } + return v; + }; + if (s.endsWith('w')) { + return xparseint() * week; + } + if (s.endsWith('d')) { + return xparseint() * day; + } + if (s.endsWith('h')) { + return xparseint() * hour; + } + if (s.endsWith('m')) { + return xparseint() * minute; + } + if (s.endsWith('s')) { + return xparseint() * second; + } + throw new Error('bad duration ' + s); + }; + const formatDuration = (v) => { + if (v === 0) { + return ''; + } + const is = (period) => v > 0 && Math.round(v / period) === v / period; + const format = (period, s) => '' + (v / period) + s; + if (is(week)) { + return format(week, 'w'); + } + if (is(day)) { + return format(day, 'd'); + } + if (is(hour)) { + return format(hour, 'h'); + } + if (is(minute)) { + return format(minute, 'm'); + } + return format(second, 's'); + }; let importForm; let importFieldset; let mailboxFileHint; let mailboxPrefixHint; let importProgress; let importAbortBox; + let suppressionAddress; + let suppressionReason; const importTrack = async (token) => { const importConnection = dom.div('Waiting for updates...'); importProgress.appendChild(importConnection); @@ -952,6 +1211,139 @@ const index = async () => { const exportForm = (filename) => { return dom.form(attr.target('_blank'), attr.method('POST'), attr.action('export/' + filename), dom.input(attr.type('hidden'), attr.name('csrf'), attr.value(localStorageGet('webaccountcsrftoken') || '')), dom.submitbutton('Export')); }; + const authorizationPopup = (dest) => { + let username; + let password; + const close = popup(dom.form(function submit(e) { + e.preventDefault(); + e.stopPropagation(); + dest.value = 'Basic ' + window.btoa(username.value + ':' + password.value); + close(); + }, dom.p('Compose HTTP Basic authentication header'), dom.div(style({ marginBottom: '1ex' }), dom.div(dom.label('Username')), username = dom.input(attr.required(''))), dom.div(style({ marginBottom: '1ex' 
}), dom.div(dom.label('Password (shown in clear)')), password = dom.input(attr.required(''))), dom.div(style({ marginBottom: '1ex' }), dom.submitbutton('Set')), dom.div('A HTTP Basic authorization header contains the password in plain text, as base64.'))); + username.focus(); + }; + const popupTestOutgoing = () => { + let fieldset; + let event; + let dsn; + let suppressing; + let queueMsgID; + let fromID; + let messageID; + let error; + let extra; + let body; + let curl; + let result; + let data = { + Version: 0, + Event: api.OutgoingEvent.EventDelivered, + DSN: false, + Suppressing: false, + QueueMsgID: 123, + FromID: 'MDEyMzQ1Njc4OWFiY2RlZg', + MessageID: '', + Subject: 'test from mox web pages', + WebhookQueued: new Date(), + SMTPCode: 0, + SMTPEnhancedCode: '', + Error: '', + Extra: {}, + }; + const onchange = function change() { + data = { + Version: 0, + Event: event.value, + DSN: dsn.checked, + Suppressing: suppressing.checked, + QueueMsgID: parseInt(queueMsgID.value), + FromID: fromID.value, + MessageID: messageID.value, + Subject: 'test from mox web pages', + WebhookQueued: new Date(), + SMTPCode: 0, + SMTPEnhancedCode: '', + Error: error.value, + Extra: JSON.parse(extra.value), + }; + const curlStr = "curl " + (outgoingWebhookAuthorization.value ? 
"-H 'Authorization: " + outgoingWebhookAuthorization.value + "' " : "") + "-H 'X-Mox-Webhook-ID: 1' -H 'X-Mox-Webhook-Attempt: 1' --json '" + JSON.stringify(data) + "' '" + outgoingWebhookURL.value + "'"; + dom._kids(curl, style({ maxWidth: '45em', wordBreak: 'break-all' }), curlStr); + body.value = JSON.stringify(data, undefined, "\t"); + }; + popup(dom.h1('Test webhook for outgoing delivery'), dom.form(async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + result.classList.add('loadstart'); + const [code, response, errmsg] = await check(fieldset, client.OutgoingWebhookTest(outgoingWebhookURL.value, outgoingWebhookAuthorization.value, data)); + const nresult = dom.div(dom._class('loadend'), dom.table(dom.tr(dom.td('HTTP status code'), dom.td('' + code)), dom.tr(dom.td('Error message'), dom.td(errmsg)), dom.tr(dom.td('Response'), dom.td(response)))); + result.replaceWith(nresult); + result = nresult; + }, fieldset = dom.fieldset(dom.p('Make a test call to ', dom.b(outgoingWebhookURL.value), '.'), dom.div(style({ display: 'flex', gap: '1em' }), dom.div(dom.h2('Parameters'), dom.div(style({ marginBottom: '.5ex' }), dom.label('Event', dom.div(event = dom.select(onchange, ["delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized"].map(s => dom.option(s.substring(0, 1).toUpperCase() + s.substring(1), attr.value(s))))))), dom.div(style({ marginBottom: '.5ex' }), dom.label(dsn = dom.input(attr.type('checkbox')), ' DSN', onchange)), dom.div(style({ marginBottom: '.5ex' }), dom.label(suppressing = dom.input(attr.type('checkbox')), ' Suppressing', onchange)), dom.div(style({ marginBottom: '.5ex' }), dom.label('Queue message ID ', dom.div(queueMsgID = dom.input(attr.required(''), attr.type('number'), attr.value('123'), onchange)))), dom.div(style({ marginBottom: '.5ex' }), dom.label('From ID ', dom.div(fromID = dom.input(attr.required(''), attr.value(data.FromID), onchange)))), dom.div(style({ marginBottom: '.5ex' }), 
dom.label('MessageID', dom.div(messageID = dom.input(attr.required(''), attr.value(data.MessageID), onchange)))), dom.div(style({ marginBottom: '.5ex' }), dom.label('Error', dom.div(error = dom.input(onchange)))), dom.div(style({ marginBottom: '.5ex' }), dom.label('Extra', dom.div(extra = dom.input(attr.required(''), attr.value('{}'), onchange))))), dom.div(dom.h2('Headers'), dom.pre('X-Mox-Webhook-ID: 1\nX-Mox-Webhook-Attempt: 1'), dom.br(), dom.h2('JSON'), body = dom.textarea(attr.disabled(''), attr.rows('15'), style({ width: '30em' })), dom.br(), dom.h2('curl'), curl = dom.div(dom._class('literal')))), dom.br(), dom.div(style({ textAlign: 'right' }), dom.submitbutton('Post')), dom.br(), result = dom.div()))); + onchange(); + }; + const popupTestIncoming = () => { + let fieldset; + let body; + let curl; + let result; + let data = { + Version: 0, + From: [{ Name: 'remote', Address: 'remote@remote.example' }], + To: [{ Name: 'mox', Address: 'mox@mox.example' }], + CC: [], + BCC: [], + ReplyTo: [], + Subject: 'test webhook for incoming message', + MessageID: '', + InReplyTo: '', + References: [], + Date: new Date(), + Text: 'hi ☺\n', + HTML: '', + Structure: { + ContentType: 'text/plain', + ContentTypeParams: { charset: 'utf-8' }, + ContentID: '', + DecodedSize: 8, + Parts: [], + }, + Meta: { + MsgID: 1, + MailFrom: 'remote@remote.example', + MailFromValidated: true, + MsgFromValidated: true, + RcptTo: 'mox@localhost', + DKIMVerifiedDomains: ['remote.example'], + RemoteIP: '127.0.0.1', + Received: new Date(), + MailboxName: 'Inbox', + Automated: false, + }, + }; + const onchange = function change() { + try { + api.parser.Incoming(JSON.parse(body.value)); + } + catch (err) { + console.log({ err }); + window.alert('Error parsing data: ' + errmsg(err)); + } + const curlStr = "curl " + (incomingWebhookAuthorization.value ? 
"-H 'Authorization: " + incomingWebhookAuthorization.value + "' " : "") + "-H 'X-Mox-Webhook-ID: 1' -H 'X-Mox-Webhook-Attempt: 1' --json '" + JSON.stringify(data) + "' '" + incomingWebhookURL.value + "'"; + dom._kids(curl, style({ maxWidth: '45em', wordBreak: 'break-all' }), curlStr); + }; + popup(dom.h1('Test webhook for incoming delivery'), dom.form(async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + result.classList.add('loadstart'); + const [code, response, errmsg] = await check(fieldset, (async () => await client.IncomingWebhookTest(incomingWebhookURL.value, incomingWebhookAuthorization.value, api.parser.Incoming(JSON.parse(body.value))))()); + const nresult = dom.div(dom._class('loadend'), dom.table(dom.tr(dom.td('HTTP status code'), dom.td('' + code)), dom.tr(dom.td('Error message'), dom.td(errmsg)), dom.tr(dom.td('Response'), dom.td(response)))); + result.replaceWith(nresult); + result = nresult; + }, fieldset = dom.fieldset(dom.p('Make a test call to ', dom.b(incomingWebhookURL.value), '.'), dom.div(style({ display: 'flex', gap: '1em' }), dom.div(dom.h2('JSON'), body = dom.textarea(style({ maxHeight: '90vh' }), style({ width: '30em' }), onchange)), dom.div(dom.h2('Headers'), dom.pre('X-Mox-Webhook-ID: 1\nX-Mox-Webhook-Attempt: 1'), dom.br(), dom.h2('curl'), curl = dom.div(dom._class('literal')))), dom.br(), dom.div(style({ textAlign: 'right' }), dom.submitbutton('Post')), dom.br(), result = dom.div()))); + body.value = JSON.stringify(data, undefined, '\t'); + body.setAttribute('rows', '' + Math.min(40, (body.value.split('\n').length + 1))); + onchange(); + }; dom._kids(page, crumbs('Mox Account'), dom.p('NOTE: Not all account settings can be configured through these pages yet. See the configuration file for more options.'), dom.div('Default domain: ', acc.DNSDomain.ASCII ? 
domainString(acc.DNSDomain) : '(none)'), dom.br(), fullNameForm = dom.form(fullNameFieldset = dom.fieldset(dom.label(style({ display: 'inline-block' }), 'Full name', dom.br(), fullName = dom.input(attr.value(acc.FullName), attr.title('Name to use in From header when composing messages. Can be overridden per configured address.'))), ' ', dom.submitbutton('Save')), async function submit(e) { e.preventDefault(); await check(fullNameFieldset, client.AccountSaveFullName(fullName.value)); @@ -989,7 +1381,67 @@ const index = async () => { ' (', '' + Math.floor(100 * storageUsed / storageLimit), '%).', - ] : [', no explicit limit is configured.']), dom.h2('Export'), dom.p('Export all messages in all mailboxes. In maildir or mbox format, as .zip or .tgz file.'), dom.table(dom._class('slim'), dom.tr(dom.td('Maildirs in .tgz'), dom.td(exportForm('mail-export-maildir.tgz'))), dom.tr(dom.td('Maildirs in .zip'), dom.td(exportForm('mail-export-maildir.zip'))), dom.tr(dom.td('Mbox files in .tgz'), dom.td(exportForm('mail-export-mbox.tgz'))), dom.tr(dom.td('Mbox files in .zip'), dom.td(exportForm('mail-export-mbox.zip')))), dom.br(), dom.h2('Import'), dom.p('Import messages from a .zip or .tgz file with maildirs and/or mbox files.'), importForm = dom.form(async function submit(e) { + ] : [', no explicit limit is configured.']), dom.h2('Webhooks'), dom.h3('Outgoing', attr.title('Webhooks for outgoing messages are called for each attempt to deliver a message in the outgoing queue, e.g. 
when the queue has delivered a message to the next hop, when a single attempt failed with a temporary error, when delivery permanently failed, or when DSN (delivery status notification) messages were received about a previously sent message.')), dom.form(async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + await check(outgoingWebhookFieldset, client.OutgoingWebhookSave(outgoingWebhookURL.value, outgoingWebhookAuthorization.value, [...outgoingWebhookEvents.selectedOptions].map(o => o.value))); + }, outgoingWebhookFieldset = dom.fieldset(dom.div(style({ display: 'flex', gap: '1em' }), dom.div(dom.label(dom.div('URL', attr.title('URL to do an HTTP POST to for each event. Webhooks are disabled if empty.')), outgoingWebhookURL = dom.input(attr.value(acc.OutgoingWebhook?.URL || ''), style({ width: '30em' })))), dom.div(dom.label(dom.div('Authorization header ', dom.a('Basic', attr.href(''), function click(e) { + e.preventDefault(); + authorizationPopup(outgoingWebhookAuthorization); + }), attr.title('If non-empty, HTTP requests have this value as Authorization header, e.g. Basic .')), outgoingWebhookAuthorization = dom.input(attr.value(acc.OutgoingWebhook?.Authorization || '')))), dom.div(dom.label(style({ verticalAlign: 'top' }), dom.div('Events', attr.title('Either limit to specific events, or receive all events (default).')), outgoingWebhookEvents = dom.select(style({ verticalAlign: 'bottom' }), attr.multiple(''), attr.size('8'), // Number of options. + ["delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized"].map(s => dom.option(s.substring(0, 1).toUpperCase() + s.substring(1), attr.value(s), acc.OutgoingWebhook?.Events?.includes(s) ? 
attr.selected('') : []))))), dom.div(dom.div(dom.label('\u00a0')), dom.submitbutton('Save'), ' ', dom.clickbutton('Test', function click() { + popupTestOutgoing(); + }))))), dom.br(), dom.h3('Incoming', attr.title('Webhooks for incoming messages are called for each message received over SMTP, excluding DSN messages about previous deliveries.')), dom.form(async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + await check(incomingWebhookFieldset, client.IncomingWebhookSave(incomingWebhookURL.value, incomingWebhookAuthorization.value)); + }, incomingWebhookFieldset = dom.fieldset(dom.div(style({ display: 'flex', gap: '1em' }), dom.div(dom.label(dom.div('URL'), incomingWebhookURL = dom.input(attr.value(acc.IncomingWebhook?.URL || ''), style({ width: '30em' })))), dom.div(dom.label(dom.div('Authorization header ', dom.a('Basic', attr.href(''), function click(e) { + e.preventDefault(); + authorizationPopup(incomingWebhookAuthorization); + }), attr.title('If non-empty, HTTP requests have this value as Authorization header, e.g. Basic .')), incomingWebhookAuthorization = dom.input(attr.value(acc.IncomingWebhook?.Authorization || '')))), dom.div(dom.div(dom.label('\u00a0')), dom.submitbutton('Save'), ' ', dom.clickbutton('Test', function click() { + popupTestIncoming(); + }))))), dom.br(), dom.h2('Keep messages/webhooks retired from queue', attr.title('After delivering a message or webhook from the queue it is removed by default. But you can also keep these "retired" messages/webhooks around for a while. With unique SMTP MAIL FROM addresses configured below, this allows relating incoming delivery status notification messages (DSNs) to previously sent messages and their original recipients, which is needed for automatic management of recipient suppression lists, which is important for managing the reputation of your mail server. For both messages and webhooks, this can be useful for debugging. 
Use values like "3d" for 3 days, or units "s" for second, "m" for minute, "h" for hour, "w" for week.')), dom.form(async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + await check(keepRetiredPeriodsFieldset, (async () => await client.KeepRetiredPeriodsSave(parseDuration(keepRetiredMessagePeriod.value), parseDuration(keepRetiredWebhookPeriod.value)))()); + }, keepRetiredPeriodsFieldset = dom.fieldset(dom.div(style({ display: 'flex', gap: '1em', alignItems: 'flex-end' }), dom.div(dom.label('Message deliveries', dom.br(), keepRetiredMessagePeriod = dom.input(attr.value(formatDuration(acc.KeepRetiredMessagePeriod))))), dom.div(dom.label('Webhook deliveries', dom.br(), keepRetiredWebhookPeriod = dom.input(attr.value(formatDuration(acc.KeepRetiredWebhookPeriod))))), dom.div(dom.submitbutton('Save'))))), dom.br(), dom.h2('Unique SMTP MAIL FROM login addresses', attr.title('Outgoing messages are normally sent using your email address in the SMTP MAIL FROM command. By using unique addresses (by using the localpart catchall separator, e.g. addresses of the form "localpart+@domain"), future incoming DSNs can be related to the original outgoing messages and recipients, which allows for automatic management of recipient suppression lists when keeping retired messages for as long as you expect DSNs to come in as configured above. Configure the addresses used for logging in with SMTP submission, the webapi or webmail for which unique SMTP MAIL FROM addresses should be enabled. 
Note: These are addresses used for authenticating, not the address in the message "From" header.')), (() => { + let inputs = []; + let elem; + const render = () => { + inputs = []; + const e = dom.form(async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + await check(fromIDLoginAddressesFieldset, client.FromIDLoginAddressesSave(inputs.map(e => e.value))); + }, fromIDLoginAddressesFieldset = dom.fieldset(dom.table(dom.tbody((acc.FromIDLoginAddresses || []).length === 0 ? dom.tr(dom.td('(None)'), dom.td()) : [], (acc.FromIDLoginAddresses || []).map((s, index) => { + const input = dom.input(attr.required(''), attr.value(s)); + inputs.push(input); + const x = dom.tr(dom.td(input), dom.td(dom.clickbutton('Remove', function click() { + acc.FromIDLoginAddresses.splice(index, 1); + render(); + }))); + return x; + })), dom.tfoot(dom.tr(dom.td(), dom.td(dom.clickbutton('Add', function click() { + acc.FromIDLoginAddresses = (acc.FromIDLoginAddresses || []).concat(['']); + render(); + }))), dom.tr(dom.td(attr.colspan('2'), dom.submitbutton('Save'))))))); + if (elem) { + elem.replaceWith(e); + elem = e; + } + return e; + }; + elem = render(); + return elem; + })(), dom.br(), dom.h2('Suppression list'), dom.p('Messages queued for delivery to recipients on the suppression list will immediately fail. If delivery to a recipient fails repeatedly, it can be added to the suppression list automatically. Repeated rejected delivery attempts can have a negative influence on mail server reputation. 
Applications sending email can implement their own handling of delivery failure notifications, but not all do.'), dom.form(attr.id('suppressionAdd'), async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + await check(e.target, client.SuppressionAdd(suppressionAddress.value, true, suppressionReason.value)); + window.location.reload(); // todo: reload less + }), dom.table(dom.thead(dom.tr(dom.th('Address', attr.title('Address that caused this entry to be added to the list. The title (shown on hover) displays an address with a fictional simplified localpart, with lower-cased, dots removed, only first part before "+" or "-" (typically catchall separators). When checking if an address is on the suppression list, it is checked against this address.')), dom.th('Manual', attr.title('Whether suppression was added manually, instead of automatically based on bounces.')), dom.th('Reason'), dom.th('Since'), dom.th('Action'))), dom.tbody((suppressions || []).length === 0 ? dom.tr(dom.td(attr.colspan('5'), '(None)')) : [], (suppressions || []).map(s => dom.tr(dom.td(s.OriginalAddress, attr.title(s.BaseAddress)), dom.td(s.Manual ? '✓' : ''), dom.td(s.Reason), dom.td(age(s.Created)), dom.td(dom.clickbutton('Remove', async function click(e) { + await check(e.target, client.SuppressionRemove(s.OriginalAddress)); + window.location.reload(); // todo: reload less + }))))), dom.tfoot(dom.tr(dom.td(suppressionAddress = dom.input(attr.required(''), attr.form('suppressionAdd'))), dom.td(), dom.td(suppressionReason = dom.input(style({ width: '100%' }), attr.form('suppressionAdd'))), dom.td(), dom.td(dom.submitbutton('Add suppression', attr.form('suppressionAdd')))))), dom.br(), dom.h2('Export'), dom.p('Export all messages in all mailboxes. 
In maildir or mbox format, as .zip or .tgz file.'), dom.table(dom._class('slim'), dom.tr(dom.td('Maildirs in .tgz'), dom.td(exportForm('mail-export-maildir.tgz'))), dom.tr(dom.td('Maildirs in .zip'), dom.td(exportForm('mail-export-maildir.zip'))), dom.tr(dom.td('Mbox files in .tgz'), dom.td(exportForm('mail-export-mbox.tgz'))), dom.tr(dom.td('Mbox files in .zip'), dom.td(exportForm('mail-export-mbox.zip')))), dom.br(), dom.h2('Import'), dom.p('Import messages from a .zip or .tgz file with maildirs and/or mbox files.'), importForm = dom.form(async function submit(e) { e.preventDefault(); e.stopPropagation(); const request = async () => { @@ -1054,7 +1506,7 @@ const index = async () => { })), mailboxFileHint = dom.p(style({ display: 'none', fontStyle: 'italic', marginTop: '.5ex' }), 'This file must either be a zip file or a gzipped tar file with mbox and/or maildir mailboxes. For maildirs, an optional file "dovecot-keywords" is read for additional keywords, like Forwarded/Junk/NotJunk. If an imported mailbox already exists by name, messages are added to the existing mailbox. If a mailbox does not yet exist it will be created.')), dom.div(style({ marginBottom: '1ex' }), dom.label(dom.div(style({ marginBottom: '.5ex' }), 'Skip mailbox prefix (optional)'), dom.input(attr.name('skipMailboxPrefix'), function focus() { mailboxPrefixHint.style.display = ''; })), mailboxPrefixHint = dom.p(style({ display: 'none', fontStyle: 'italic', marginTop: '.5ex' }), 'If set, any mbox/maildir path with this prefix will have it stripped before importing. For example, if all mailboxes are in a directory "Takeout", specify that path in the field above so mailboxes like "Takeout/Inbox.mbox" are imported into a mailbox called "Inbox" instead of "Takeout/Inbox".')), dom.div(dom.submitbutton('Upload and import'), dom.p(style({ fontStyle: 'italic', marginTop: '.5ex' }), 'The file is uploaded first, then its messages are imported, finally messages are matched for threading. 
Importing is done in a transaction, you can abort the entire import before it is finished.')))), importAbortBox = dom.div(), // Outside fieldset because it gets disabled, above progress because may be scrolling it down quickly with problems. - importProgress = dom.div(style({ display: 'none' })), footer); + importProgress = dom.div(style({ display: 'none' })), dom.br(), footer); // Try to show the progress of an earlier import session. The user may have just // refreshed the browser. let importToken; diff --git a/webaccount/account.ts b/webaccount/account.ts index 6ab7b8a..04fa303 100644 --- a/webaccount/account.ts +++ b/webaccount/account.ts @@ -84,6 +84,48 @@ const login = async (reason: string) => { }) } +// Popup shows kids in a centered div with white background on top of a +// transparent overlay on top of the window. Clicking the overlay or hitting +// Escape closes the popup. Scrollbars are automatically added to the div with +// kids. Returns a function that removes the popup. +const popup = (...kids: ElemArg[]) => { + const origFocus = document.activeElement + const close = () => { + if (!root.parentNode) { + return + } + root.remove() + if (origFocus && origFocus instanceof HTMLElement && origFocus.parentNode) { + origFocus.focus() + } + } + let content: HTMLElement + const root = dom.div( + style({position: 'fixed', top: 0, right: 0, bottom: 0, left: 0, backgroundColor: 'rgba(0, 0, 0, 0.1)', display: 'flex', alignItems: 'center', justifyContent: 'center', zIndex: '1'}), + function keydown(e: KeyboardEvent) { + if (e.key === 'Escape') { + e.stopPropagation() + close() + } + }, + function click(e: MouseEvent) { + e.stopPropagation() + close() + }, + content=dom.div( + attr.tabindex('0'), + style({backgroundColor: 'white', borderRadius: '.25em', padding: '1em', boxShadow: '0 0 20px rgba(0, 0, 0, 0.1)', border: '1px solid #ddd', maxWidth: '95vw', overflowX: 'auto', maxHeight: '95vh', overflowY: 'auto'}), + function click(e: MouseEvent) { + 
e.stopPropagation() + }, + kids, + ) + ) + document.body.appendChild(root) + content.focus() + return close +} + const localStorageGet = (k: string): string | null => { try { return window.localStorage.getItem(k) @@ -195,6 +237,42 @@ const yellow = '#ffe400' const red = '#ff7443' const blue = '#8bc8ff' +const age = (date: Date) => { + const r = dom.span(dom._class('notooltip'), attr.title(date.toString())) + const nowSecs = new Date().getTime()/1000 + let t = nowSecs - date.getTime()/1000 + let negative = '' + if (t < 0) { + negative = '-' + t = -t + } + const minute = 60 + const hour = 60*minute + const day = 24*hour + const month = 30*day + const year = 365*day + const periods = [year, month, day, hour, minute] + const suffix = ['y', 'mo', 'd', 'h', 'min'] + let s + for (let i = 0; i < periods.length; i++) { + const p = periods[i] + if (t >= 2*p || i === periods.length-1) { + const n = Math.round(t/p) + s = '' + n + suffix[i] + break + } + } + if (t < 60) { + s = '<1min' + // Prevent showing '-<1min' when browser and server have relatively small time drift of max 1 minute. 
+ negative = '' + } + + dom._kids(r, negative+s) + return r +} + + const formatQuotaSize = (v: number) => { if (v === 0) { return '0' @@ -213,7 +291,7 @@ const formatQuotaSize = (v: number) => { } const index = async () => { - const [acc, storageUsed, storageLimit] = await client.Account() + const [acc, storageUsed, storageLimit, suppressions] = await client.Account() let fullNameForm: HTMLFormElement let fullNameFieldset: HTMLFieldSetElement @@ -224,6 +302,55 @@ const index = async () => { let password2: HTMLInputElement let passwordHint: HTMLElement + let outgoingWebhookFieldset: HTMLFieldSetElement + let outgoingWebhookURL: HTMLInputElement + let outgoingWebhookAuthorization: HTMLInputElement + let outgoingWebhookEvents: HTMLSelectElement + + let incomingWebhookFieldset: HTMLFieldSetElement + let incomingWebhookURL: HTMLInputElement + let incomingWebhookAuthorization: HTMLInputElement + + let keepRetiredPeriodsFieldset: HTMLFieldSetElement + let keepRetiredMessagePeriod: HTMLInputElement + let keepRetiredWebhookPeriod: HTMLInputElement + + let fromIDLoginAddressesFieldset: HTMLFieldSetElement + + const second = 1000*1000*1000 + const minute = 60*second + const hour = 60*minute + const day = 24*hour + const week = 7*day + const parseDuration = (s: string) => { + if (!s) { return 0 } + const xparseint = () => { + const v = parseInt(s.substring(0, s.length-1)) + if (isNaN(v) || Math.round(v) !== v) { + throw new Error('bad number in duration') + } + return v + } + if (s.endsWith('w')) { return xparseint()*week } + if (s.endsWith('d')) { return xparseint()*day } + if (s.endsWith('h')) { return xparseint()*hour } + if (s.endsWith('m')) { return xparseint()*minute } + if (s.endsWith('s')) { return xparseint()*second } + throw new Error('bad duration '+s) + } + const formatDuration = (v: number) => { + if (v === 0) { + return '' + } + const is = (period: number) => v > 0 && Math.round(v/period) === v/period + const format = (period: number, s: string) => 
''+(v/period)+s + if (is(week)) { return format(week, 'w') } + if (is(day)) { return format(day, 'd') } + if (is(hour)) { return format(hour, 'h') } + if (is(minute)) { return format(minute, 'm') } + return format(second, 's') + } + let importForm: HTMLFormElement let importFieldset: HTMLFieldSetElement let mailboxFileHint: HTMLElement @@ -231,6 +358,9 @@ const index = async () => { let importProgress: HTMLElement let importAbortBox: HTMLElement + let suppressionAddress: HTMLInputElement + let suppressionReason: HTMLInputElement + const importTrack = async (token: string) => { const importConnection = dom.div('Waiting for updates...') importProgress.appendChild(importConnection) @@ -345,6 +475,252 @@ const index = async () => { ) } + const authorizationPopup = (dest: HTMLInputElement) => { + let username: HTMLInputElement + let password: HTMLInputElement + const close = popup( + dom.form( + function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + dest.value = 'Basic '+window.btoa(username.value+':'+password.value) + close() + }, + dom.p('Compose HTTP Basic authentication header'), + dom.div( + style({marginBottom: '1ex'}), + dom.div(dom.label('Username')), + username=dom.input(attr.required('')), + ), + dom.div( + style({marginBottom: '1ex'}), + dom.div(dom.label('Password (shown in clear)')), + password=dom.input(attr.required('')), + ), + dom.div( + style({marginBottom: '1ex'}), + dom.submitbutton('Set'), + ), + dom.div('A HTTP Basic authorization header contains the password in plain text, as base64.'), + ), + ) + username.focus() + } + + const popupTestOutgoing = () => { + let fieldset: HTMLFieldSetElement + let event: HTMLSelectElement + let dsn: HTMLInputElement + let suppressing: HTMLInputElement + let queueMsgID: HTMLInputElement + let fromID: HTMLInputElement + let messageID: HTMLInputElement + let error: HTMLInputElement + let extra: HTMLInputElement + let body: HTMLTextAreaElement + let curl: HTMLElement + let result: HTMLElement + 
+ let data: api.Outgoing = { + Version: 0, + Event: api.OutgoingEvent.EventDelivered, + DSN: false, + Suppressing: false, + QueueMsgID: 123, + FromID: 'MDEyMzQ1Njc4OWFiY2RlZg', + MessageID: '', + Subject: 'test from mox web pages', + WebhookQueued: new Date(), + SMTPCode: 0, + SMTPEnhancedCode: '', + Error: '', + Extra: {}, + } + const onchange = function change() { + data = { + Version: 0, + Event: event.value as api.OutgoingEvent, + DSN: dsn.checked, + Suppressing: suppressing.checked, + QueueMsgID: parseInt(queueMsgID.value), + FromID: fromID.value, + MessageID: messageID.value, + Subject: 'test from mox web pages', + WebhookQueued: new Date(), + SMTPCode: 0, + SMTPEnhancedCode: '', + Error: error.value, + Extra: JSON.parse(extra.value), + } + const curlStr = "curl " + (outgoingWebhookAuthorization.value ? "-H 'Authorization: "+outgoingWebhookAuthorization.value+"' " : "") + "-H 'X-Mox-Webhook-ID: 1' -H 'X-Mox-Webhook-Attempt: 1' --json '"+JSON.stringify(data)+"' '"+outgoingWebhookURL.value+"'" + dom._kids(curl, style({maxWidth: '45em', wordBreak: 'break-all'}), curlStr) + body.value = JSON.stringify(data, undefined, "\t") + } + + popup( + dom.h1('Test webhook for outgoing delivery'), + dom.form( + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + result.classList.add('loadstart') + const [code, response, errmsg] = await check(fieldset, client.OutgoingWebhookTest(outgoingWebhookURL.value, outgoingWebhookAuthorization.value, data)) + const nresult = dom.div( + dom._class('loadend'), + dom.table( + dom.tr(dom.td('HTTP status code'), dom.td(''+code)), + dom.tr(dom.td('Error message'), dom.td(errmsg)), + dom.tr(dom.td('Response'), dom.td(response)), + ), + ) + result.replaceWith(nresult) + result = nresult + }, + fieldset=dom.fieldset( + dom.p('Make a test call to ', dom.b(outgoingWebhookURL.value), '.'), + dom.div(style({display: 'flex', gap: '1em'}), + dom.div( + dom.h2('Parameters'), + dom.div( + style({marginBottom: '.5ex'}), + 
dom.label( + 'Event', + dom.div( + event=dom.select(onchange, + ["delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized"].map(s => dom.option(s.substring(0, 1).toUpperCase()+s.substring(1), attr.value(s))), + ), + ), + ), + ), + dom.div(style({marginBottom: '.5ex'}), dom.label(dsn=dom.input(attr.type('checkbox')), ' DSN', onchange)), + dom.div(style({marginBottom: '.5ex'}), dom.label(suppressing=dom.input(attr.type('checkbox')), ' Suppressing', onchange)), + dom.div(style({marginBottom: '.5ex'}), dom.label('Queue message ID ', dom.div(queueMsgID=dom.input(attr.required(''), attr.type('number'), attr.value('123'), onchange)))), + dom.div(style({marginBottom: '.5ex'}), dom.label('From ID ', dom.div(fromID=dom.input(attr.required(''), attr.value(data.FromID), onchange)))), + dom.div(style({marginBottom: '.5ex'}), dom.label('MessageID', dom.div(messageID=dom.input(attr.required(''), attr.value(data.MessageID), onchange)))), + dom.div(style({marginBottom: '.5ex'}), dom.label('Error', dom.div(error=dom.input(onchange)))), + dom.div(style({marginBottom: '.5ex'}), dom.label('Extra', dom.div(extra=dom.input(attr.required(''), attr.value('{}'), onchange)))), + ), + dom.div( + dom.h2('Headers'), + dom.pre('X-Mox-Webhook-ID: 1\nX-Mox-Webhook-Attempt: 1'), + dom.br(), + dom.h2('JSON'), + body=dom.textarea(attr.disabled(''), attr.rows('15'), style({width: '30em'})), + dom.br(), + dom.h2('curl'), + curl=dom.div(dom._class('literal')), + ), + ), + dom.br(), + dom.div(style({textAlign: 'right'}), dom.submitbutton('Post')), + dom.br(), + result=dom.div(), + ), + ), + ) + + onchange() + } + + const popupTestIncoming = () => { + let fieldset: HTMLFieldSetElement + let body: HTMLTextAreaElement + let curl: HTMLElement + let result: HTMLElement + + let data: api.Incoming = { + Version: 0, + From: [{Name: 'remote', Address: 'remote@remote.example'}], + To: [{Name: 'mox', Address: 'mox@mox.example'}], + CC: [], + BCC: [], + ReplyTo: [], + Subject: 
'test webhook for incoming message', + MessageID: '', + InReplyTo: '', + References: [], + Date: new Date(), + Text: 'hi ☺\n', + HTML: '', + Structure: { + ContentType: 'text/plain', + ContentTypeParams: {charset: 'utf-8'}, + ContentID: '', + DecodedSize: 8, + Parts: [], + }, + Meta: { + MsgID: 1, + MailFrom: 'remote@remote.example', + MailFromValidated: true, + MsgFromValidated: true, + RcptTo: 'mox@localhost', + DKIMVerifiedDomains: ['remote.example'], + RemoteIP: '127.0.0.1', + Received: new Date(), + MailboxName: 'Inbox', + Automated: false, + }, + } + + const onchange = function change() { + try { + data = api.parser.Incoming(JSON.parse(body.value)) + } catch (err) { + console.log({err}) + window.alert('Error parsing data: '+errmsg(err)) + } + const curlStr = "curl " + (incomingWebhookAuthorization.value ? "-H 'Authorization: "+incomingWebhookAuthorization.value+"' " : "") + "-H 'X-Mox-Webhook-ID: 1' -H 'X-Mox-Webhook-Attempt: 1' --json '"+JSON.stringify(data)+"' '"+incomingWebhookURL.value+"'" + dom._kids(curl, style({maxWidth: '45em', wordBreak: 'break-all'}), curlStr) + } + + popup( + dom.h1('Test webhook for incoming delivery'), + dom.form( + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + result.classList.add('loadstart') + const [code, response, errmsg] = await check(fieldset, (async () => await client.IncomingWebhookTest(incomingWebhookURL.value, incomingWebhookAuthorization.value, api.parser.Incoming(JSON.parse(body.value))))()) + const nresult = dom.div( + dom._class('loadend'), + dom.table( + dom.tr(dom.td('HTTP status code'), dom.td(''+code)), + dom.tr(dom.td('Error message'), dom.td(errmsg)), + dom.tr(dom.td('Response'), dom.td(response)), + ), + ) + result.replaceWith(nresult) + result = nresult + }, + fieldset=dom.fieldset( + dom.p('Make a test call to ', dom.b(incomingWebhookURL.value), '.'), + dom.div(style({display: 'flex', gap: '1em'}), + dom.div( + dom.h2('JSON'), + body=dom.textarea(style({maxHeight: '90vh'}), 
style({width: '30em'}), onchange), + ), + dom.div( + dom.h2('Headers'), + dom.pre('X-Mox-Webhook-ID: 1\nX-Mox-Webhook-Attempt: 1'), + dom.br(), + + dom.h2('curl'), + curl=dom.div(dom._class('literal')), + ), + ), + dom.br(), + dom.div(style({textAlign: 'right'}), dom.submitbutton('Post')), + dom.br(), + result=dom.div(), + ), + ), + ) + body.value = JSON.stringify(data, undefined, '\t') + body.setAttribute('rows', ''+Math.min(40, (body.value.split('\n').length+1))) + onchange() + } + dom._kids(page, crumbs('Mox Account'), dom.p('NOTE: Not all account settings can be configured through these pages yet. See the configuration file for more options.'), @@ -386,6 +762,7 @@ const index = async () => { ), ), dom.br(), + dom.h2('Change password'), passwordForm=dom.form( passwordFieldset=dom.fieldset( @@ -442,6 +819,7 @@ const index = async () => { }, ), dom.br(), + dom.h2('Disk usage'), dom.p('Storage used is ', dom.b(formatQuotaSize(Math.floor(storageUsed/(1024*1024))*1024*1024)), storageLimit > 0 ? [ @@ -450,6 +828,256 @@ const index = async () => { ''+Math.floor(100*storageUsed/storageLimit), '%).', ] : [', no explicit limit is configured.']), + + dom.h2('Webhooks'), + dom.h3('Outgoing', attr.title('Webhooks for outgoing messages are called for each attempt to deliver a message in the outgoing queue, e.g. 
when the queue has delivered a message to the next hop, when a single attempt failed with a temporary error, when delivery permanently failed, or when DSN (delivery status notification) messages were received about a previously sent message.')), + dom.form( + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + await check(outgoingWebhookFieldset, client.OutgoingWebhookSave(outgoingWebhookURL.value, outgoingWebhookAuthorization.value, [...outgoingWebhookEvents.selectedOptions].map(o => o.value))) + }, + outgoingWebhookFieldset=dom.fieldset( + dom.div(style({display: 'flex', gap: '1em'}), + dom.div( + dom.label( + dom.div('URL', attr.title('URL to do an HTTP POST to for each event. Webhooks are disabled if empty.')), + outgoingWebhookURL=dom.input(attr.value(acc.OutgoingWebhook?.URL || ''), style({width: '30em'})), + ), + ), + dom.div( + dom.label( + dom.div( + 'Authorization header ', + dom.a( + 'Basic', + attr.href(''), + function click(e: MouseEvent) { + e.preventDefault() + authorizationPopup(outgoingWebhookAuthorization) + }, + ), + attr.title('If non-empty, HTTP requests have this value as Authorization header, e.g. Basic .'), + ), + outgoingWebhookAuthorization=dom.input(attr.value(acc.OutgoingWebhook?.Authorization || '')), + ), + ), + dom.div( + dom.label( + style({verticalAlign: 'top'}), + dom.div('Events', attr.title('Either limit to specific events, or receive all events (default).')), + outgoingWebhookEvents=dom.select( + style({verticalAlign: 'bottom'}), + attr.multiple(''), + attr.size('8'), // Number of options. + ["delivered", "suppressed", "delayed", "failed", "relayed", "expanded", "canceled", "unrecognized"].map(s => dom.option(s.substring(0, 1).toUpperCase()+s.substring(1), attr.value(s), acc.OutgoingWebhook?.Events?.includes(s) ? 
attr.selected('') : [])), + ), + ), + ), + dom.div( + dom.div(dom.label('\u00a0')), + dom.submitbutton('Save'), ' ', + dom.clickbutton('Test', function click() { + popupTestOutgoing() + }), + ), + ), + ), + ), + dom.br(), + dom.h3('Incoming', attr.title('Webhooks for incoming messages are called for each message received over SMTP, excluding DSN messages about previous deliveries.')), + dom.form( + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + await check(incomingWebhookFieldset, client.IncomingWebhookSave(incomingWebhookURL.value, incomingWebhookAuthorization.value)) + }, + incomingWebhookFieldset=dom.fieldset( + dom.div( + style({display: 'flex', gap: '1em'}), + dom.div( + dom.label( + dom.div('URL'), + incomingWebhookURL=dom.input(attr.value(acc.IncomingWebhook?.URL || ''), style({width: '30em'})), + ), + ), + dom.div( + dom.label( + dom.div( + 'Authorization header ', + dom.a( + 'Basic', + attr.href(''), + function click(e: MouseEvent) { + e.preventDefault() + authorizationPopup(incomingWebhookAuthorization) + }, + ), + attr.title('If non-empty, HTTP requests have this value as Authorization header, e.g. Basic .'), + ), + incomingWebhookAuthorization=dom.input(attr.value(acc.IncomingWebhook?.Authorization || '')), + ), + ), + dom.div( + dom.div(dom.label('\u00a0')), + dom.submitbutton('Save'), ' ', + dom.clickbutton('Test', function click() { + popupTestIncoming() + }), + ), + ), + ), + ), + dom.br(), + + dom.h2('Keep messages/webhooks retired from queue', attr.title('After delivering a message or webhook from the queue it is removed by default. But you can also keep these "retired" messages/webhooks around for a while. 
With unique SMTP MAIL FROM addresses configured below, this allows relating incoming delivery status notification messages (DSNs) to previously sent messages and their original recipients, which is needed for automatic management of recipient suppression lists, which is important for managing the reputation of your mail server. For both messages and webhooks, this can be useful for debugging. Use values like "3d" for 3 days, or units "s" for second, "m" for minute, "h" for hour, "w" for week.')), + dom.form( + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + await check(keepRetiredPeriodsFieldset, (async () => await client.KeepRetiredPeriodsSave(parseDuration(keepRetiredMessagePeriod.value), parseDuration(keepRetiredWebhookPeriod.value)))()) + }, + keepRetiredPeriodsFieldset=dom.fieldset( + dom.div( + style({display: 'flex', gap: '1em', alignItems: 'flex-end'}), + dom.div( + dom.label( + 'Message deliveries', + dom.br(), + keepRetiredMessagePeriod=dom.input(attr.value(formatDuration(acc.KeepRetiredMessagePeriod))), + ), + ), + dom.div( + dom.label( + 'Webhook deliveries', + dom.br(), + keepRetiredWebhookPeriod=dom.input(attr.value(formatDuration(acc.KeepRetiredWebhookPeriod))), + ), + ), + dom.div( + dom.submitbutton('Save'), + ), + ), + ), + ), + dom.br(), + + dom.h2('Unique SMTP MAIL FROM login addresses', attr.title('Outgoing messages are normally sent using your email address in the SMTP MAIL FROM command. By using unique addresses (by using the localpart catchall separator, e.g. addresses of the form "localpart+@domain"), future incoming DSNs can be related to the original outgoing messages and recipients, which allows for automatic management of recipient suppression lists when keeping retired messages for as long as you expect DSNs to come in as configured above. Configure the addresses used for logging in with SMTP submission, the webapi or webmail for which unique SMTP MAIL FROM addresses should be enabled. 
Note: These are addresses used for authenticating, not the address in the message "From" header.')), + (() => { + let inputs: HTMLInputElement[] = [] + let elem: HTMLElement + + const render = () => { + inputs = [] + + const e = dom.form( + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + await check(fromIDLoginAddressesFieldset, client.FromIDLoginAddressesSave(inputs.map(e => e.value))) + }, + fromIDLoginAddressesFieldset=dom.fieldset( + dom.table( + dom.tbody( + (acc.FromIDLoginAddresses || []).length === 0 ? dom.tr(dom.td('(None)'), dom.td()) : [], + (acc.FromIDLoginAddresses || []).map((s, index) => { + const input = dom.input(attr.required(''), attr.value(s)) + inputs.push(input) + const x = dom.tr( + dom.td(input), + dom.td( + dom.clickbutton('Remove', function click() { + acc.FromIDLoginAddresses!.splice(index, 1) + render() + }), + ), + ) + return x + }), + ), + dom.tfoot( + dom.tr( + dom.td(), + dom.td( + dom.clickbutton('Add', function click() { + acc.FromIDLoginAddresses = (acc.FromIDLoginAddresses || []).concat(['']) + render() + }), + ), + ), + dom.tr( + dom.td(attr.colspan('2'), dom.submitbutton('Save')), + ), + ), + ), + ), + ) + if (elem) { + elem.replaceWith(e) + elem = e + } + return e + } + elem = render() + return elem + })(), + dom.br(), + + dom.h2('Suppression list'), + dom.p('Messages queued for delivery to recipients on the suppression list will immediately fail. If delivery to a recipient fails repeatedly, it can be added to the suppression list automatically. Repeated rejected delivery attempts can have a negative influence on mail server reputation. Applications sending email can implement their own handling of delivery failure notifications, but not all do.'), + dom.form( + attr.id('suppressionAdd'), + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + await check(e.target! 
as HTMLButtonElement, client.SuppressionAdd(suppressionAddress.value, true, suppressionReason.value)) + window.location.reload() // todo: reload less + }, + ), + dom.table( + dom.thead( + dom.tr( + dom.th('Address', attr.title('Address that caused this entry to be added to the list. The title (shown on hover) displays an address with a fictional simplified localpart: lower-cased, dots removed, only the first part before "+" or "-" (typically catchall separators). When checking if an address is on the suppression list, it is checked against this address.')), + dom.th('Manual', attr.title('Whether suppression was added manually, instead of automatically based on bounces.')), + dom.th('Reason'), + dom.th('Since'), + dom.th('Action'), + ), + ), + dom.tbody( + (suppressions || []).length === 0 ? dom.tr(dom.td(attr.colspan('5'), '(None)')) : [], + (suppressions || []).map(s => + dom.tr( + dom.td(s.OriginalAddress, attr.title(s.BaseAddress)), + dom.td(s.Manual ? '✓' : ''), + dom.td(s.Reason), + dom.td(age(s.Created)), + dom.td( + dom.clickbutton('Remove', async function click(e: MouseEvent) { + await check(e.target! as HTMLButtonElement, client.SuppressionRemove(s.OriginalAddress)) + window.location.reload() // todo: reload less + }) + ), + ), + ), + ), + dom.tfoot( + dom.tr( + dom.td(suppressionAddress=dom.input(attr.required(''), attr.form('suppressionAdd'))), + dom.td(), + dom.td(suppressionReason=dom.input(style({width: '100%'}), attr.form('suppressionAdd'))), + dom.td(), + dom.td(dom.submitbutton('Add suppression', attr.form('suppressionAdd'))), + ), + ), + ), + dom.br(), + + dom.h2('Export'), + dom.p('Export all messages in all mailboxes. 
In maildir or mbox format, as .zip or .tgz file.'), dom.table(dom._class('slim'), @@ -471,6 +1099,7 @@ const index = async () => { ), ), dom.br(), + dom.h2('Import'), dom.p('Import messages from a .zip or .tgz file with maildirs and/or mbox files.'), importForm=dom.form( @@ -570,6 +1199,8 @@ const index = async () => { importProgress=dom.div( style({display: 'none'}), ), + dom.br(), + footer, ) @@ -744,6 +1375,7 @@ const destination = async (name: string) => { fullName=dom.input(attr.value(dest.FullName)), ), dom.br(), + dom.h2('Rulesets'), dom.p('Incoming messages are checked against the rulesets. If a ruleset matches, the message is delivered to the mailbox configured for the ruleset instead of to the default mailbox.'), dom.p('"Is Forward" does not affect matching, but prevents the sending mail server from being included in future junk classifications by clearing fields related to the forwarding email server (IP address, EHLO domain, MAIL FROM domain and a matching DKIM domain), and prevents DMARC rejects for forwarded messages.'), diff --git a/webaccount/account_test.go b/webaccount/account_test.go index 6b2f7f5..f011bce 100644 --- a/webaccount/account_test.go +++ b/webaccount/account_test.go @@ -16,18 +16,22 @@ import ( "os" "path" "path/filepath" + "reflect" "runtime/debug" "sort" "strings" "testing" + "time" "github.com/mjl-/bstore" "github.com/mjl-/sherpa" "github.com/mjl-/mox/mlog" "github.com/mjl-/mox/mox-" + "github.com/mjl-/mox/queue" "github.com/mjl-/mox/store" "github.com/mjl-/mox/webauth" + "github.com/mjl-/mox/webhook" ) var ctxbg = context.Background() @@ -73,6 +77,13 @@ func tneedErrorCode(t *testing.T, code string, fn func()) { fn() } +func tcompare(t *testing.T, got, expect any) { + t.Helper() + if !reflect.DeepEqual(got, expect) { + t.Fatalf("got:\n%#v\nexpected:\n%#v", got, expect) + } +} + func TestAccount(t *testing.T) { os.RemoveAll("../testdata/httpaccount/data") mox.ConfigStaticPath = 
filepath.FromSlash("../testdata/httpaccount/mox.conf") @@ -216,7 +227,9 @@ func TestAccount(t *testing.T) { api.SetPassword(ctx, "test1234") - account, _, _ := api.Account(ctx) + err = queue.Init() // For DB. + tcheck(t, err, "queue init") + account, _, _, _ := api.Account(ctx) api.DestinationSave(ctx, "mjl☺@mox.example", account.Destinations["mjl☺@mox.example"], account.Destinations["mjl☺@mox.example"]) // todo: save modified value and compare it afterwards api.AccountSaveFullName(ctx, account.FullName+" changed") // todo: check if value was changed @@ -371,6 +384,59 @@ func TestAccount(t *testing.T) { testExport("/export/mail-export-mbox.tgz", false, 2) testExport("/export/mail-export-mbox.zip", true, 2) + sl := api.SuppressionList(ctx) + tcompare(t, len(sl), 0) + + api.SuppressionAdd(ctx, "mjl@mox.example", true, "testing") + tneedErrorCode(t, "user:error", func() { api.SuppressionAdd(ctx, "mjl@mox.example", true, "testing") }) // Duplicate. + tneedErrorCode(t, "user:error", func() { api.SuppressionAdd(ctx, "bogus", true, "testing") }) // Bad address. + + sl = api.SuppressionList(ctx) + tcompare(t, len(sl), 1) + + api.SuppressionRemove(ctx, "mjl@mox.example") + tneedErrorCode(t, "user:error", func() { api.SuppressionRemove(ctx, "mjl@mox.example") }) // Absent. + tneedErrorCode(t, "user:error", func() { api.SuppressionRemove(ctx, "bogus") }) // Not an address. 
+ + var hooks int + hookServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + fmt.Fprintln(w, "ok") + hooks++ + })) + defer hookServer.Close() + + api.OutgoingWebhookSave(ctx, "http://localhost:1234", "Basic base64", []string{"delivered"}) + api.OutgoingWebhookSave(ctx, "http://localhost:1234", "Basic base64", []string{}) + tneedErrorCode(t, "user:error", func() { + api.OutgoingWebhookSave(ctx, "http://localhost:1234/outgoing", "Basic base64", []string{"bogus"}) + }) + tneedErrorCode(t, "user:error", func() { api.OutgoingWebhookSave(ctx, "invalid", "Basic base64", nil) }) + api.OutgoingWebhookSave(ctx, "", "", nil) // Restore. + + code, response, errmsg := api.OutgoingWebhookTest(ctx, hookServer.URL, "", webhook.Outgoing{}) + tcompare(t, code, 200) + tcompare(t, response, "ok\n") + tcompare(t, errmsg, "") + tneedErrorCode(t, "user:error", func() { api.OutgoingWebhookTest(ctx, "bogus", "", webhook.Outgoing{}) }) + + api.IncomingWebhookSave(ctx, "http://localhost:1234", "Basic base64") + tneedErrorCode(t, "user:error", func() { api.IncomingWebhookSave(ctx, "invalid", "Basic base64") }) + api.IncomingWebhookSave(ctx, "", "") // Restore. + + code, response, errmsg = api.IncomingWebhookTest(ctx, hookServer.URL, "", webhook.Incoming{}) + tcompare(t, code, 200) + tcompare(t, response, "ok\n") + tcompare(t, errmsg, "") + tneedErrorCode(t, "user:error", func() { api.IncomingWebhookTest(ctx, "bogus", "", webhook.Incoming{}) }) + + api.FromIDLoginAddressesSave(ctx, []string{"mjl☺@mox.example"}) + api.FromIDLoginAddressesSave(ctx, []string{"mjl☺@mox.example", "mjl☺+fromid@mox.example"}) + api.FromIDLoginAddressesSave(ctx, []string{}) + tneedErrorCode(t, "user:error", func() { api.FromIDLoginAddressesSave(ctx, []string{"bogus@other.example"}) }) + + api.KeepRetiredPeriodsSave(ctx, time.Minute, time.Minute) + api.KeepRetiredPeriodsSave(ctx, 0, 0) // Restore. 
+ api.Logout(ctx) tneedErrorCode(t, "server:error", func() { api.Logout(ctx) }) } diff --git a/webaccount/api.json b/webaccount/api.json index 4997ffc..9f9a224 100644 --- a/webaccount/api.json +++ b/webaccount/api.json @@ -88,12 +88,19 @@ "Typewords": [ "int64" ] + }, + { + "Name": "suppressions", + "Typewords": [ + "[]", + "Suppression" + ] } ] }, { "Name": "AccountSaveFullName", - "Docs": "", + "Docs": "AccountSaveFullName saves the full name (used as display name in email messages)\nfor the account.", "Params": [ { "Name": "fullName", @@ -154,6 +161,231 @@ ] } ] + }, + { + "Name": "SuppressionList", + "Docs": "SuppressionList lists the addresses on the suppression list of this account.", + "Params": [], + "Returns": [ + { + "Name": "suppressions", + "Typewords": [ + "[]", + "Suppression" + ] + } + ] + }, + { + "Name": "SuppressionAdd", + "Docs": "SuppressionAdd adds an email address to the suppression list.", + "Params": [ + { + "Name": "address", + "Typewords": [ + "string" + ] + }, + { + "Name": "manual", + "Typewords": [ + "bool" + ] + }, + { + "Name": "reason", + "Typewords": [ + "string" + ] + } + ], + "Returns": [ + { + "Name": "suppression", + "Typewords": [ + "Suppression" + ] + } + ] + }, + { + "Name": "SuppressionRemove", + "Docs": "SuppressionRemove removes the email address from the suppression list.", + "Params": [ + { + "Name": "address", + "Typewords": [ + "string" + ] + } + ], + "Returns": [] + }, + { + "Name": "OutgoingWebhookSave", + "Docs": "OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url\nis empty, the webhook is disabled. If authorization is non-empty it is used for\nthe Authorization header in HTTP requests. 
Events specifies the outgoing events\nto be delivered, or all if empty/nil.", + "Params": [ + { + "Name": "url", + "Typewords": [ + "string" + ] + }, + { + "Name": "authorization", + "Typewords": [ + "string" + ] + }, + { + "Name": "events", + "Typewords": [ + "[]", + "string" + ] + } + ], + "Returns": [] + }, + { + "Name": "OutgoingWebhookTest", + "Docs": "OutgoingWebhookTest makes a test webhook call to urlStr, with optional\nauthorization. If the HTTP request is made this call will succeed also for\nnon-2xx HTTP status codes.", + "Params": [ + { + "Name": "urlStr", + "Typewords": [ + "string" + ] + }, + { + "Name": "authorization", + "Typewords": [ + "string" + ] + }, + { + "Name": "data", + "Typewords": [ + "Outgoing" + ] + } + ], + "Returns": [ + { + "Name": "code", + "Typewords": [ + "int32" + ] + }, + { + "Name": "response", + "Typewords": [ + "string" + ] + }, + { + "Name": "errmsg", + "Typewords": [ + "string" + ] + } + ] + }, + { + "Name": "IncomingWebhookSave", + "Docs": "IncomingWebhookSave saves a new webhook url for incoming deliveries. If url is\nempty, the webhook is disabled. If authorization is not empty, it is used in\nthe Authorization header in requests.", + "Params": [ + { + "Name": "url", + "Typewords": [ + "string" + ] + }, + { + "Name": "authorization", + "Typewords": [ + "string" + ] + } + ], + "Returns": [] + }, + { + "Name": "IncomingWebhookTest", + "Docs": "IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr,\nwith optional authorization header. 
If the HTTP call is made, this function\nreturns non-error regardless of HTTP status code.", + "Params": [ + { + "Name": "urlStr", + "Typewords": [ + "string" + ] + }, + { + "Name": "authorization", + "Typewords": [ + "string" + ] + }, + { + "Name": "data", + "Typewords": [ + "Incoming" + ] + } + ], + "Returns": [ + { + "Name": "code", + "Typewords": [ + "int32" + ] + }, + { + "Name": "response", + "Typewords": [ + "string" + ] + }, + { + "Name": "errmsg", + "Typewords": [ + "string" + ] + } + ] + }, + { + "Name": "FromIDLoginAddressesSave", + "Docs": "FromIDLoginAddressesSave saves new login addresses to enable unique SMTP\nMAIL FROM addresses (\"fromid\") for deliveries from the queue.", + "Params": [ + { + "Name": "loginAddresses", + "Typewords": [ + "[]", + "string" + ] + } + ], + "Returns": [] + }, + { + "Name": "KeepRetiredPeriodsSave", + "Docs": "KeepRetiredPeriodsSave saves periods to keep retired messages and webhooks.", + "Params": [ + { + "Name": "keepRetiredMessagePeriod", + "Typewords": [ + "int64" + ] + }, + { + "Name": "keepRetiredWebhookPeriod", + "Typewords": [ + "int64" + ] + } + ], + "Returns": [] } ], "Sections": [], @@ -162,6 +394,44 @@ "Name": "Account", "Docs": "", "Fields": [ + { + "Name": "OutgoingWebhook", + "Docs": "", + "Typewords": [ + "nullable", + "OutgoingWebhook" + ] + }, + { + "Name": "IncomingWebhook", + "Docs": "", + "Typewords": [ + "nullable", + "IncomingWebhook" + ] + }, + { + "Name": "FromIDLoginAddresses", + "Docs": "", + "Typewords": [ + "[]", + "string" + ] + }, + { + "Name": "KeepRetiredMessagePeriod", + "Docs": "", + "Typewords": [ + "int64" + ] + }, + { + "Name": "KeepRetiredWebhookPeriod", + "Docs": "", + "Typewords": [ + "int64" + ] + }, { + "Name": "Domain", + "Docs": "", @@ -272,6 +542,54 @@ } ] }, + { + "Name": "OutgoingWebhook", + "Docs": "", + "Fields": [ + { + "Name": "URL", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Authorization", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { 
"Name": "Events", + "Docs": "", + "Typewords": [ + "[]", + "string" + ] + } + ] + }, + { + "Name": "IncomingWebhook", + "Docs": "", + "Fields": [ + { + "Name": "URL", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Authorization", + "Docs": "", + "Typewords": [ + "string" + ] + } + ] + }, { "Name": "Destination", "Docs": "", @@ -551,6 +869,61 @@ } ] }, + { + "Name": "Suppression", + "Docs": "Suppression is an address to which messages will not be delivered. Attempts to\ndeliver or queue will result in an immediate permanent failure to deliver.", + "Fields": [ + { + "Name": "ID", + "Docs": "", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Created", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "Account", + "Docs": "Suppression applies to this account only.", + "Typewords": [ + "string" + ] + }, + { + "Name": "BaseAddress", + "Docs": "Unicode. Address with fictional simplified localpart: lowercase, dots removed (gmail), first token before any \"-\" or \"+\" (typical catchall separator).", + "Typewords": [ + "string" + ] + }, + { + "Name": "OriginalAddress", + "Docs": "Unicode. 
Address that caused this suppression.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Manual", + "Docs": "", + "Typewords": [ + "bool" + ] + }, + { + "Name": "Reason", + "Docs": "", + "Typewords": [ + "string" + ] + } + ] + }, { "Name": "ImportProgress", "Docs": "ImportProgress is returned after uploading a file to import.", @@ -563,6 +936,362 @@ ] } ] + }, + { + "Name": "Outgoing", + "Docs": "Outgoing is the payload sent to webhook URLs for events about outgoing deliveries.", + "Fields": [ + { + "Name": "Version", + "Docs": "Format of hook, currently 0.", + "Typewords": [ + "int32" + ] + }, + { + "Name": "Event", + "Docs": "Type of outgoing delivery event.", + "Typewords": [ + "OutgoingEvent" + ] + }, + { + "Name": "DSN", + "Docs": "If this event was triggered by a delivery status notification message (DSN).", + "Typewords": [ + "bool" + ] + }, + { + "Name": "Suppressing", + "Docs": "If true, this failure caused the address to be added to the suppression list.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "QueueMsgID", + "Docs": "ID of message in queue.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "FromID", + "Docs": "As used in MAIL FROM, can be empty, for incoming messages.", + "Typewords": [ + "string" + ] + }, + { + "Name": "MessageID", + "Docs": "From Message-Id header, as set by submitter or us, with enclosing \u003c\u003e.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Subject", + "Docs": "Of original message.", + "Typewords": [ + "string" + ] + }, + { + "Name": "WebhookQueued", + "Docs": "When webhook was first queued for delivery.", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "SMTPCode", + "Docs": "Optional, for errors only, e.g. 451, 550. See package smtp for definitions.", + "Typewords": [ + "int32" + ] + }, + { + "Name": "SMTPEnhancedCode", + "Docs": "Optional, for errors only, e.g. 
5.1.1.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Error", + "Docs": "Error message while delivering, or from DSN from remote, if any.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Extra", + "Docs": "Extra fields set for message during submit, through webapi call or through X-Mox-Extra-* headers during SMTP submission.", + "Typewords": [ + "{}", + "string" + ] + } + ] + }, + { + "Name": "Incoming", + "Docs": "Incoming is the data sent to a webhook for incoming deliveries over SMTP.", + "Fields": [ + { + "Name": "Version", + "Docs": "Format of hook, currently 0.", + "Typewords": [ + "int32" + ] + }, + { + "Name": "From", + "Docs": "Message \"From\" header, typically has one address.", + "Typewords": [ + "[]", + "NameAddress" + ] + }, + { + "Name": "To", + "Docs": "", + "Typewords": [ + "[]", + "NameAddress" + ] + }, + { + "Name": "CC", + "Docs": "", + "Typewords": [ + "[]", + "NameAddress" + ] + }, + { + "Name": "BCC", + "Docs": "Often empty, even if you were a BCC recipient.", + "Typewords": [ + "[]", + "NameAddress" + ] + }, + { + "Name": "ReplyTo", + "Docs": "Optional Reply-To header, typically absent or with one address.", + "Typewords": [ + "[]", + "NameAddress" + ] + }, + { + "Name": "Subject", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "MessageID", + "Docs": "Of Message-Id header, typically of the form \"\u003crandom@hostname\u003e\", includes \u003c\u003e.", + "Typewords": [ + "string" + ] + }, + { + "Name": "InReplyTo", + "Docs": "Optional, the message-id this message is a reply to. Includes \u003c\u003e.", + "Typewords": [ + "string" + ] + }, + { + "Name": "References", + "Docs": "Optional, zero or more message-ids this message is a reply/forward/related to. The last entry is the most recent/immediate message this is a reply to. Earlier entries are the parents in a thread. 
Values include \u003c\u003e.", + "Typewords": [ + "[]", + "string" + ] + }, + { + "Name": "Date", + "Docs": "Time in \"Date\" message header, can be different from time received.", + "Typewords": [ + "nullable", + "timestamp" + ] + }, + { + "Name": "Text", + "Docs": "Contents of text/plain and/or text/html part (if any), with \"\\n\" line-endings, converted from \"\\r\\n\". Values are truncated to 1MB (1024*1024 bytes). Use webapi MessagePartGet to retrieve the full part data.", + "Typewords": [ + "string" + ] + }, + { + "Name": "HTML", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Structure", + "Docs": "Parsed form of MIME message.", + "Typewords": [ + "Structure" + ] + }, + { + "Name": "Meta", + "Docs": "Details about message in storage, and SMTP transaction details.", + "Typewords": [ + "IncomingMeta" + ] + } + ] + }, + { + "Name": "NameAddress", + "Docs": "", + "Fields": [ + { + "Name": "Name", + "Docs": "Optional, human-readable \"display name\" of the addressee.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Address", + "Docs": "Required, email address.", + "Typewords": [ + "string" + ] + } + ] + }, + { + "Name": "Structure", + "Docs": "", + "Fields": [ + { + "Name": "ContentType", + "Docs": "Lower case, e.g. text/plain.", + "Typewords": [ + "string" + ] + }, + { + "Name": "ContentTypeParams", + "Docs": "Lower case keys, original case values, e.g. {\"charset\": \"UTF-8\"}.", + "Typewords": [ + "{}", + "string" + ] + }, + { + "Name": "ContentID", + "Docs": "Can be empty. Otherwise, should be a value wrapped in \u003c\u003e's. For use in HTML, referenced as URI `cid:...`.", + "Typewords": [ + "string" + ] + }, + { + "Name": "DecodedSize", + "Docs": "Size of content after decoding content-transfer-encoding. 
For text and HTML parts, this can be larger than the data returned since this size includes \\r\\n line endings.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Parts", + "Docs": "Subparts of a multipart message, possibly recursive.", + "Typewords": [ + "[]", + "Structure" + ] + } + ] + }, + { + "Name": "IncomingMeta", + "Docs": "", + "Fields": [ + { + "Name": "MsgID", + "Docs": "ID of message in storage, and to use in webapi calls like MessageGet.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "MailFrom", + "Docs": "Address used during SMTP \"MAIL FROM\" command.", + "Typewords": [ + "string" + ] + }, + { + "Name": "MailFromValidated", + "Docs": "Whether SMTP MAIL FROM address was SPF-validated.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "MsgFromValidated", + "Docs": "Whether address in message \"From\"-header was DMARC(-like) validated.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "RcptTo", + "Docs": "SMTP RCPT TO address used in SMTP.", + "Typewords": [ + "string" + ] + }, + { + "Name": "DKIMVerifiedDomains", + "Docs": "Verified domains from DKIM-signature in message. Can be different domain than used in addresses.", + "Typewords": [ + "[]", + "string" + ] + }, + { + "Name": "RemoteIP", + "Docs": "Where the message was delivered from.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Received", + "Docs": "When message was received, may be different from the Date header.", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "MailboxName", + "Docs": "Mailbox where message was delivered to, based on configured rules. Defaults to \"Inbox\".", + "Typewords": [ + "string" + ] + }, + { + "Name": "Automated", + "Docs": "Whether this message was automated and should not receive automated replies. E.g. 
out of office or mailing list messages.", + "Typewords": [ + "bool" + ] + } + ] } ], "Ints": [], @@ -571,6 +1300,52 @@ "Name": "CSRFToken", "Docs": "", "Values": null + }, + { + "Name": "OutgoingEvent", + "Docs": "OutgoingEvent is an activity for an outgoing delivery. Either generated by the\nqueue, or through an incoming DSN (delivery status notification) message.", + "Values": [ + { + "Name": "EventDelivered", + "Value": "delivered", + "Docs": "Message was accepted by a next-hop server. This does not necessarily mean the\nmessage has been delivered in the mailbox of the user." + }, + { + "Name": "EventSuppressed", + "Value": "suppressed", + "Docs": "Outbound delivery was suppressed because the recipient address is on the\nsuppression list of the account, or a simplified/base variant of the address is." + }, + { + "Name": "EventDelayed", + "Value": "delayed", + "Docs": "A delivery attempt failed but delivery will be retried again later." + }, + { + "Name": "EventFailed", + "Value": "failed", + "Docs": "Delivery of the message failed and will not be tried again. Also see the\n\"Suppressing\" field of [Outgoing]." + }, + { + "Name": "EventRelayed", + "Value": "relayed", + "Docs": "Message was relayed into a system that does not generate DSNs. Should only\nhappen when explicitly requested." + }, + { + "Name": "EventExpanded", + "Value": "expanded", + "Docs": "Message was accepted and is being delivered to multiple recipients (e.g. the\naddress was an alias/list), which may generate more DSNs." + }, + { + "Name": "EventCanceled", + "Value": "canceled", + "Docs": "Message was removed from the queue, e.g. canceled by admin/user." + }, + { + "Name": "EventUnrecognized", + "Value": "unrecognized", + "Docs": "An incoming message was received that was either a DSN with an unknown event\ntype (\"action\"), or an incoming non-DSN-message was received for the unique\nper-outgoing-message address used for sending." 
+ } + ] } ], "SherpaVersion": 0, diff --git a/webaccount/api.ts b/webaccount/api.ts index 318fc77..8456905 100644 --- a/webaccount/api.ts +++ b/webaccount/api.ts @@ -3,6 +3,11 @@ namespace api { export interface Account { + OutgoingWebhook?: OutgoingWebhook | null + IncomingWebhook?: IncomingWebhook | null + FromIDLoginAddresses?: string[] | null + KeepRetiredMessagePeriod: number + KeepRetiredWebhookPeriod: number Domain: string Description: string FullName: string @@ -20,6 +25,17 @@ export interface Account { DNSDomain: Domain // Parsed form of Domain. } +export interface OutgoingWebhook { + URL: string + Authorization: string + Events?: string[] | null +} + +export interface IncomingWebhook { + URL: string + Authorization: string +} + export interface Destination { Mailbox: string Rulesets?: Ruleset[] | null @@ -78,18 +94,120 @@ export interface Route { ToDomainASCII?: string[] | null } +// Suppression is an address to which messages will not be delivered. Attempts to +// deliver or queue will result in an immediate permanent failure to deliver. +export interface Suppression { + ID: number + Created: Date + Account: string // Suppression applies to this account only. + BaseAddress: string // Unicode. Address with fictional simplified localpart: lowercase, dots removed (gmail), first token before any "-" or "+" (typical catchall separator). + OriginalAddress: string // Unicode. Address that caused this suppression. + Manual: boolean + Reason: string +} + // ImportProgress is returned after uploading a file to import. export interface ImportProgress { Token: string // For fetching progress, or cancelling an import. } +// Outgoing is the payload sent to webhook URLs for events about outgoing deliveries. +export interface Outgoing { + Version: number // Format of hook, currently 0. + Event: OutgoingEvent // Type of outgoing delivery event. + DSN: boolean // If this event was triggered by a delivery status notification message (DSN). 
+ Suppressing: boolean // If true, this failure caused the address to be added to the suppression list. + QueueMsgID: number // ID of message in queue. + FromID: string // As used in MAIL FROM, can be empty, for incoming messages. + MessageID: string // From Message-Id header, as set by submitter or us, with enclosing <>. + Subject: string // Of original message. + WebhookQueued: Date // When webhook was first queued for delivery. + SMTPCode: number // Optional, for errors only, e.g. 451, 550. See package smtp for definitions. + SMTPEnhancedCode: string // Optional, for errors only, e.g. 5.1.1. + Error: string // Error message while delivering, or from DSN from remote, if any. + Extra?: { [key: string]: string } // Extra fields set for message during submit, through webapi call or through X-Mox-Extra-* headers during SMTP submission. +} + +// Incoming is the data sent to a webhook for incoming deliveries over SMTP. +export interface Incoming { + Version: number // Format of hook, currently 0. + From?: NameAddress[] | null // Message "From" header, typically has one address. + To?: NameAddress[] | null + CC?: NameAddress[] | null + BCC?: NameAddress[] | null // Often empty, even if you were a BCC recipient. + ReplyTo?: NameAddress[] | null // Optional Reply-To header, typically absent or with one address. + Subject: string + MessageID: string // Of Message-Id header, typically of the form "<random@hostname>", includes <>. + InReplyTo: string // Optional, the message-id this message is a reply to. Includes <>. + References?: string[] | null // Optional, zero or more message-ids this message is a reply/forward/related to. The last entry is the most recent/immediate message this is a reply to. Earlier entries are the parents in a thread. Values include <>. + Date?: Date | null // Time in "Date" message header, can be different from time received. + Text: string // Contents of text/plain and/or text/html part (if any), with "\n" line-endings, converted from "\r\n".
Values are truncated to 1MB (1024*1024 bytes). Use webapi MessagePartGet to retrieve the full part data. + HTML: string + Structure: Structure // Parsed form of MIME message. + Meta: IncomingMeta // Details about message in storage, and SMTP transaction details. +} + +export interface NameAddress { + Name: string // Optional, human-readable "display name" of the addressee. + Address: string // Required, email address. +} + +export interface Structure { + ContentType: string // Lower case, e.g. text/plain. + ContentTypeParams?: { [key: string]: string } // Lower case keys, original case values, e.g. {"charset": "UTF-8"}. + ContentID: string // Can be empty. Otherwise, should be a value wrapped in <>'s. For use in HTML, referenced as URI `cid:...`. + DecodedSize: number // Size of content after decoding content-transfer-encoding. For text and HTML parts, this can be larger than the data returned since this size includes \r\n line endings. + Parts?: Structure[] | null // Subparts of a multipart message, possibly recursive. +} + +export interface IncomingMeta { + MsgID: number // ID of message in storage, and to use in webapi calls like MessageGet. + MailFrom: string // Address used during SMTP "MAIL FROM" command. + MailFromValidated: boolean // Whether SMTP MAIL FROM address was SPF-validated. + MsgFromValidated: boolean // Whether address in message "From"-header was DMARC(-like) validated. + RcptTo: string // SMTP RCPT TO address used in SMTP. + DKIMVerifiedDomains?: string[] | null // Verified domains from DKIM-signature in message. Can be different domain than used in addresses. + RemoteIP: string // Where the message was delivered from. + Received: Date // When message was received, may be different from the Date header. + MailboxName: string // Mailbox where message was delivered to, based on configured rules. Defaults to "Inbox". + Automated: boolean // Whether this message was automated and should not receive automated replies. E.g. 
out of office or mailing list messages. +} + export type CSRFToken = string -export const structTypes: {[typename: string]: boolean} = {"Account":true,"AutomaticJunkFlags":true,"Destination":true,"Domain":true,"ImportProgress":true,"JunkFilter":true,"Route":true,"Ruleset":true,"SubjectPass":true} -export const stringsTypes: {[typename: string]: boolean} = {"CSRFToken":true} +// OutgoingEvent is an activity for an outgoing delivery. Either generated by the +// queue, or through an incoming DSN (delivery status notification) message. +export enum OutgoingEvent { + // Message was accepted by a next-hop server. This does not necessarily mean the + // message has been delivered in the mailbox of the user. + EventDelivered = "delivered", + // Outbound delivery was suppressed because the recipient address is on the + // suppression list of the account, or a simplified/base variant of the address is. + EventSuppressed = "suppressed", + EventDelayed = "delayed", // A delivery attempt failed but delivery will be retried again later. + // Delivery of the message failed and will not be tried again. Also see the + // "Suppressing" field of [Outgoing]. + EventFailed = "failed", + // Message was relayed into a system that does not generate DSNs. Should only + // happen when explicitly requested. + EventRelayed = "relayed", + // Message was accepted and is being delivered to multiple recipients (e.g. the + // address was an alias/list), which may generate more DSNs. + EventExpanded = "expanded", + EventCanceled = "canceled", // Message was removed from the queue, e.g. canceled by admin/user. + // An incoming message was received that was either a DSN with an unknown event + // type ("action"), or an incoming non-DSN-message was received for the unique + // per-outgoing-message address used for sending. 
+ EventUnrecognized = "unrecognized", +} + +export const structTypes: {[typename: string]: boolean} = {"Account":true,"AutomaticJunkFlags":true,"Destination":true,"Domain":true,"ImportProgress":true,"Incoming":true,"IncomingMeta":true,"IncomingWebhook":true,"JunkFilter":true,"NameAddress":true,"Outgoing":true,"OutgoingWebhook":true,"Route":true,"Ruleset":true,"Structure":true,"SubjectPass":true,"Suppression":true} +export const stringsTypes: {[typename: string]: boolean} = {"CSRFToken":true,"OutgoingEvent":true} export const intsTypes: {[typename: string]: boolean} = {} export const types: TypenameMap = { - "Account": {"Name":"Account","Docs":"","Fields":[{"Name":"Domain","Docs":"","Typewords":["string"]},{"Name":"Description","Docs":"","Typewords":["string"]},{"Name":"FullName","Docs":"","Typewords":["string"]},{"Name":"Destinations","Docs":"","Typewords":["{}","Destination"]},{"Name":"SubjectPass","Docs":"","Typewords":["SubjectPass"]},{"Name":"QuotaMessageSize","Docs":"","Typewords":["int64"]},{"Name":"RejectsMailbox","Docs":"","Typewords":["string"]},{"Name":"KeepRejects","Docs":"","Typewords":["bool"]},{"Name":"AutomaticJunkFlags","Docs":"","Typewords":["AutomaticJunkFlags"]},{"Name":"JunkFilter","Docs":"","Typewords":["nullable","JunkFilter"]},{"Name":"MaxOutgoingMessagesPerDay","Docs":"","Typewords":["int32"]},{"Name":"MaxFirstTimeRecipientsPerDay","Docs":"","Typewords":["int32"]},{"Name":"NoFirstTimeSenderDelay","Docs":"","Typewords":["bool"]},{"Name":"Routes","Docs":"","Typewords":["[]","Route"]},{"Name":"DNSDomain","Docs":"","Typewords":["Domain"]}]}, + "Account": 
{"Name":"Account","Docs":"","Fields":[{"Name":"OutgoingWebhook","Docs":"","Typewords":["nullable","OutgoingWebhook"]},{"Name":"IncomingWebhook","Docs":"","Typewords":["nullable","IncomingWebhook"]},{"Name":"FromIDLoginAddresses","Docs":"","Typewords":["[]","string"]},{"Name":"KeepRetiredMessagePeriod","Docs":"","Typewords":["int64"]},{"Name":"KeepRetiredWebhookPeriod","Docs":"","Typewords":["int64"]},{"Name":"Domain","Docs":"","Typewords":["string"]},{"Name":"Description","Docs":"","Typewords":["string"]},{"Name":"FullName","Docs":"","Typewords":["string"]},{"Name":"Destinations","Docs":"","Typewords":["{}","Destination"]},{"Name":"SubjectPass","Docs":"","Typewords":["SubjectPass"]},{"Name":"QuotaMessageSize","Docs":"","Typewords":["int64"]},{"Name":"RejectsMailbox","Docs":"","Typewords":["string"]},{"Name":"KeepRejects","Docs":"","Typewords":["bool"]},{"Name":"AutomaticJunkFlags","Docs":"","Typewords":["AutomaticJunkFlags"]},{"Name":"JunkFilter","Docs":"","Typewords":["nullable","JunkFilter"]},{"Name":"MaxOutgoingMessagesPerDay","Docs":"","Typewords":["int32"]},{"Name":"MaxFirstTimeRecipientsPerDay","Docs":"","Typewords":["int32"]},{"Name":"NoFirstTimeSenderDelay","Docs":"","Typewords":["bool"]},{"Name":"Routes","Docs":"","Typewords":["[]","Route"]},{"Name":"DNSDomain","Docs":"","Typewords":["Domain"]}]}, + "OutgoingWebhook": {"Name":"OutgoingWebhook","Docs":"","Fields":[{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Authorization","Docs":"","Typewords":["string"]},{"Name":"Events","Docs":"","Typewords":["[]","string"]}]}, + "IncomingWebhook": {"Name":"IncomingWebhook","Docs":"","Fields":[{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Authorization","Docs":"","Typewords":["string"]}]}, "Destination": {"Name":"Destination","Docs":"","Fields":[{"Name":"Mailbox","Docs":"","Typewords":["string"]},{"Name":"Rulesets","Docs":"","Typewords":["[]","Ruleset"]},{"Name":"FullName","Docs":"","Typewords":["string"]}]}, "Ruleset": 
{"Name":"Ruleset","Docs":"","Fields":[{"Name":"SMTPMailFromRegexp","Docs":"","Typewords":["string"]},{"Name":"VerifiedDomain","Docs":"","Typewords":["string"]},{"Name":"HeadersRegexp","Docs":"","Typewords":["{}","string"]},{"Name":"IsForward","Docs":"","Typewords":["bool"]},{"Name":"ListAllowDomain","Docs":"","Typewords":["string"]},{"Name":"AcceptRejectsToMailbox","Docs":"","Typewords":["string"]},{"Name":"Mailbox","Docs":"","Typewords":["string"]},{"Name":"VerifiedDNSDomain","Docs":"","Typewords":["Domain"]},{"Name":"ListAllowDNSDomain","Docs":"","Typewords":["Domain"]}]}, "Domain": {"Name":"Domain","Docs":"","Fields":[{"Name":"ASCII","Docs":"","Typewords":["string"]},{"Name":"Unicode","Docs":"","Typewords":["string"]}]}, @@ -97,12 +215,21 @@ export const types: TypenameMap = { "AutomaticJunkFlags": {"Name":"AutomaticJunkFlags","Docs":"","Fields":[{"Name":"Enabled","Docs":"","Typewords":["bool"]},{"Name":"JunkMailboxRegexp","Docs":"","Typewords":["string"]},{"Name":"NeutralMailboxRegexp","Docs":"","Typewords":["string"]},{"Name":"NotJunkMailboxRegexp","Docs":"","Typewords":["string"]}]}, "JunkFilter": {"Name":"JunkFilter","Docs":"","Fields":[{"Name":"Threshold","Docs":"","Typewords":["float64"]},{"Name":"Onegrams","Docs":"","Typewords":["bool"]},{"Name":"Twograms","Docs":"","Typewords":["bool"]},{"Name":"Threegrams","Docs":"","Typewords":["bool"]},{"Name":"MaxPower","Docs":"","Typewords":["float64"]},{"Name":"TopWords","Docs":"","Typewords":["int32"]},{"Name":"IgnoreWords","Docs":"","Typewords":["float64"]},{"Name":"RareWords","Docs":"","Typewords":["int32"]}]}, "Route": {"Name":"Route","Docs":"","Fields":[{"Name":"FromDomain","Docs":"","Typewords":["[]","string"]},{"Name":"ToDomain","Docs":"","Typewords":["[]","string"]},{"Name":"MinimumAttempts","Docs":"","Typewords":["int32"]},{"Name":"Transport","Docs":"","Typewords":["string"]},{"Name":"FromDomainASCII","Docs":"","Typewords":["[]","string"]},{"Name":"ToDomainASCII","Docs":"","Typewords":["[]","string"]}]}, + 
"Suppression": {"Name":"Suppression","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"Created","Docs":"","Typewords":["timestamp"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"BaseAddress","Docs":"","Typewords":["string"]},{"Name":"OriginalAddress","Docs":"","Typewords":["string"]},{"Name":"Manual","Docs":"","Typewords":["bool"]},{"Name":"Reason","Docs":"","Typewords":["string"]}]}, "ImportProgress": {"Name":"ImportProgress","Docs":"","Fields":[{"Name":"Token","Docs":"","Typewords":["string"]}]}, + "Outgoing": {"Name":"Outgoing","Docs":"","Fields":[{"Name":"Version","Docs":"","Typewords":["int32"]},{"Name":"Event","Docs":"","Typewords":["OutgoingEvent"]},{"Name":"DSN","Docs":"","Typewords":["bool"]},{"Name":"Suppressing","Docs":"","Typewords":["bool"]},{"Name":"QueueMsgID","Docs":"","Typewords":["int64"]},{"Name":"FromID","Docs":"","Typewords":["string"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"Subject","Docs":"","Typewords":["string"]},{"Name":"WebhookQueued","Docs":"","Typewords":["timestamp"]},{"Name":"SMTPCode","Docs":"","Typewords":["int32"]},{"Name":"SMTPEnhancedCode","Docs":"","Typewords":["string"]},{"Name":"Error","Docs":"","Typewords":["string"]},{"Name":"Extra","Docs":"","Typewords":["{}","string"]}]}, + "Incoming": 
{"Name":"Incoming","Docs":"","Fields":[{"Name":"Version","Docs":"","Typewords":["int32"]},{"Name":"From","Docs":"","Typewords":["[]","NameAddress"]},{"Name":"To","Docs":"","Typewords":["[]","NameAddress"]},{"Name":"CC","Docs":"","Typewords":["[]","NameAddress"]},{"Name":"BCC","Docs":"","Typewords":["[]","NameAddress"]},{"Name":"ReplyTo","Docs":"","Typewords":["[]","NameAddress"]},{"Name":"Subject","Docs":"","Typewords":["string"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"InReplyTo","Docs":"","Typewords":["string"]},{"Name":"References","Docs":"","Typewords":["[]","string"]},{"Name":"Date","Docs":"","Typewords":["nullable","timestamp"]},{"Name":"Text","Docs":"","Typewords":["string"]},{"Name":"HTML","Docs":"","Typewords":["string"]},{"Name":"Structure","Docs":"","Typewords":["Structure"]},{"Name":"Meta","Docs":"","Typewords":["IncomingMeta"]}]}, + "NameAddress": {"Name":"NameAddress","Docs":"","Fields":[{"Name":"Name","Docs":"","Typewords":["string"]},{"Name":"Address","Docs":"","Typewords":["string"]}]}, + "Structure": {"Name":"Structure","Docs":"","Fields":[{"Name":"ContentType","Docs":"","Typewords":["string"]},{"Name":"ContentTypeParams","Docs":"","Typewords":["{}","string"]},{"Name":"ContentID","Docs":"","Typewords":["string"]},{"Name":"DecodedSize","Docs":"","Typewords":["int64"]},{"Name":"Parts","Docs":"","Typewords":["[]","Structure"]}]}, + "IncomingMeta": 
{"Name":"IncomingMeta","Docs":"","Fields":[{"Name":"MsgID","Docs":"","Typewords":["int64"]},{"Name":"MailFrom","Docs":"","Typewords":["string"]},{"Name":"MailFromValidated","Docs":"","Typewords":["bool"]},{"Name":"MsgFromValidated","Docs":"","Typewords":["bool"]},{"Name":"RcptTo","Docs":"","Typewords":["string"]},{"Name":"DKIMVerifiedDomains","Docs":"","Typewords":["[]","string"]},{"Name":"RemoteIP","Docs":"","Typewords":["string"]},{"Name":"Received","Docs":"","Typewords":["timestamp"]},{"Name":"MailboxName","Docs":"","Typewords":["string"]},{"Name":"Automated","Docs":"","Typewords":["bool"]}]}, "CSRFToken": {"Name":"CSRFToken","Docs":"","Values":null}, + "OutgoingEvent": {"Name":"OutgoingEvent","Docs":"","Values":[{"Name":"EventDelivered","Value":"delivered","Docs":""},{"Name":"EventSuppressed","Value":"suppressed","Docs":""},{"Name":"EventDelayed","Value":"delayed","Docs":""},{"Name":"EventFailed","Value":"failed","Docs":""},{"Name":"EventRelayed","Value":"relayed","Docs":""},{"Name":"EventExpanded","Value":"expanded","Docs":""},{"Name":"EventCanceled","Value":"canceled","Docs":""},{"Name":"EventUnrecognized","Value":"unrecognized","Docs":""}]}, } export const parser = { Account: (v: any) => parse("Account", v) as Account, + OutgoingWebhook: (v: any) => parse("OutgoingWebhook", v) as OutgoingWebhook, + IncomingWebhook: (v: any) => parse("IncomingWebhook", v) as IncomingWebhook, Destination: (v: any) => parse("Destination", v) as Destination, Ruleset: (v: any) => parse("Ruleset", v) as Ruleset, Domain: (v: any) => parse("Domain", v) as Domain, @@ -110,8 +237,15 @@ export const parser = { AutomaticJunkFlags: (v: any) => parse("AutomaticJunkFlags", v) as AutomaticJunkFlags, JunkFilter: (v: any) => parse("JunkFilter", v) as JunkFilter, Route: (v: any) => parse("Route", v) as Route, + Suppression: (v: any) => parse("Suppression", v) as Suppression, ImportProgress: (v: any) => parse("ImportProgress", v) as ImportProgress, + Outgoing: (v: any) => parse("Outgoing", v) 
as Outgoing, + Incoming: (v: any) => parse("Incoming", v) as Incoming, + NameAddress: (v: any) => parse("NameAddress", v) as NameAddress, + Structure: (v: any) => parse("Structure", v) as Structure, + IncomingMeta: (v: any) => parse("IncomingMeta", v) as IncomingMeta, CSRFToken: (v: any) => parse("CSRFToken", v) as CSRFToken, + OutgoingEvent: (v: any) => parse("OutgoingEvent", v) as OutgoingEvent, } // Account exports web API functions for the account web interface. All its @@ -187,14 +321,16 @@ export class Client { // Account returns information about the account. // StorageUsed is the sum of the sizes of all messages, in bytes. // StorageLimit is the maximum storage that can be used, or 0 if there is no limit. - async Account(): Promise<[Account, number, number]> { + async Account(): Promise<[Account, number, number, Suppression[] | null]> { const fn: string = "Account" const paramTypes: string[][] = [] - const returnTypes: string[][] = [["Account"],["int64"],["int64"]] + const returnTypes: string[][] = [["Account"],["int64"],["int64"],["[]","Suppression"]] const params: any[] = [] - return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [Account, number, number] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [Account, number, number, Suppression[] | null] } + // AccountSaveFullName saves the full name (used as display name in email messages) + // for the account. async AccountSaveFullName(fullName: string): Promise<void> { const fn: string = "AccountSaveFullName" const paramTypes: string[][] = [["string"]] @@ -232,6 +368,97 @@ export class Client { const params: any[] = [] return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as ImportProgress } + + // SuppressionList lists the addresses on the suppression list of this account.
+ async SuppressionList(): Promise<Suppression[] | null> { + const fn: string = "SuppressionList" + const paramTypes: string[][] = [] + const returnTypes: string[][] = [["[]","Suppression"]] + const params: any[] = [] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as Suppression[] | null + } + + // SuppressionAdd adds an email address to the suppression list. + async SuppressionAdd(address: string, manual: boolean, reason: string): Promise<Suppression> { + const fn: string = "SuppressionAdd" + const paramTypes: string[][] = [["string"],["bool"],["string"]] + const returnTypes: string[][] = [["Suppression"]] + const params: any[] = [address, manual, reason] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as Suppression + } + + // SuppressionRemove removes the email address from the suppression list. + async SuppressionRemove(address: string): Promise<void> { + const fn: string = "SuppressionRemove" + const paramTypes: string[][] = [["string"]] + const returnTypes: string[][] = [] + const params: any[] = [address] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void + } + + // OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url + // is empty, the webhook is disabled. If authorization is non-empty it is used for + // the Authorization header in HTTP requests. Events specifies the outgoing events + // to be delivered, or all if empty/nil.
+ async OutgoingWebhookSave(url: string, authorization: string, events: string[] | null): Promise<void> { + const fn: string = "OutgoingWebhookSave" + const paramTypes: string[][] = [["string"],["string"],["[]","string"]] + const returnTypes: string[][] = [] + const params: any[] = [url, authorization, events] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void + } + + // OutgoingWebhookTest makes a test webhook call to urlStr, with optional + // authorization. If the HTTP request is made this call will succeed also for + // non-2xx HTTP status codes. + async OutgoingWebhookTest(urlStr: string, authorization: string, data: Outgoing): Promise<[number, string, string]> { + const fn: string = "OutgoingWebhookTest" + const paramTypes: string[][] = [["string"],["string"],["Outgoing"]] + const returnTypes: string[][] = [["int32"],["string"],["string"]] + const params: any[] = [urlStr, authorization, data] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [number, string, string] + } + + // IncomingWebhookSave saves a new webhook url for incoming deliveries. If url is + // empty, the webhook is disabled. If authorization is not empty, it is used in + // the Authorization header in requests. + async IncomingWebhookSave(url: string, authorization: string): Promise<void> { + const fn: string = "IncomingWebhookSave" + const paramTypes: string[][] = [["string"],["string"]] + const returnTypes: string[][] = [] + const params: any[] = [url, authorization] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void + } + + // IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr, + // with optional authorization header. If the HTTP call is made, this function + // returns non-error regardless of HTTP status code.
+ async IncomingWebhookTest(urlStr: string, authorization: string, data: Incoming): Promise<[number, string, string]> { + const fn: string = "IncomingWebhookTest" + const paramTypes: string[][] = [["string"],["string"],["Incoming"]] + const returnTypes: string[][] = [["int32"],["string"],["string"]] + const params: any[] = [urlStr, authorization, data] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [number, string, string] + } + + // FromIDLoginAddressesSave saves new login addresses to enable unique SMTP + // MAIL FROM addresses ("fromid") for deliveries from the queue. + async FromIDLoginAddressesSave(loginAddresses: string[] | null): Promise<void> { + const fn: string = "FromIDLoginAddressesSave" + const paramTypes: string[][] = [["[]","string"]] + const returnTypes: string[][] = [] + const params: any[] = [loginAddresses] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void + } + + // KeepRetiredPeriodsSave saves the periods to keep retired messages and webhooks. + async KeepRetiredPeriodsSave(keepRetiredMessagePeriod: number, keepRetiredWebhookPeriod: number): Promise<void> { + const fn: string = "KeepRetiredPeriodsSave" + const paramTypes: string[][] = [["int64"],["int64"]] + const returnTypes: string[][] = [] + const params: any[] = [keepRetiredMessagePeriod, keepRetiredWebhookPeriod] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void + } } export const defaultBaseURL = (function() { diff --git a/webadmin/admin.go b/webadmin/admin.go index 3fc3009..1506f54 100644 --- a/webadmin/admin.go +++ b/webadmin/admin.go @@ -1952,7 +1952,12 @@ func (Admin) SetPassword(ctx context.Context, accountName, password string) { // AccountSettingsSave sets new settings for an account that only an admin can set.
func (Admin) AccountSettingsSave(ctx context.Context, accountName string, maxOutgoingMessagesPerDay, maxFirstTimeRecipientsPerDay int, maxMsgSize int64, firstTimeSenderDelay bool) { - err := mox.AccountAdminSettingsSave(ctx, accountName, maxOutgoingMessagesPerDay, maxFirstTimeRecipientsPerDay, maxMsgSize, firstTimeSenderDelay) + err := mox.AccountSave(ctx, accountName, func(acc *config.Account) { + acc.MaxOutgoingMessagesPerDay = maxOutgoingMessagesPerDay + acc.MaxFirstTimeRecipientsPerDay = maxFirstTimeRecipientsPerDay + acc.QuotaMessageSize = maxMsgSize + acc.NoFirstTimeSenderDelay = !firstTimeSenderDelay + }) xcheckf(ctx, err, "saving account settings") } @@ -2005,8 +2010,8 @@ func (Admin) QueueHoldRuleRemove(ctx context.Context, holdRuleID int64) { } // QueueList returns the messages currently in the outgoing queue. -func (Admin) QueueList(ctx context.Context, filter queue.Filter) []queue.Msg { - l, err := queue.List(ctx, filter) +func (Admin) QueueList(ctx context.Context, filter queue.Filter, sort queue.Sort) []queue.Msg { + l, err := queue.List(ctx, filter, sort) xcheckf(ctx, err, "listing messages in queue") return l } @@ -2066,6 +2071,59 @@ func (Admin) QueueTransportSet(ctx context.Context, filter queue.Filter, transpo return n } +// RetiredList returns messages retired from the queue (delivery could +// have succeeded or failed). +func (Admin) RetiredList(ctx context.Context, filter queue.RetiredFilter, sort queue.RetiredSort) []queue.MsgRetired { + l, err := queue.RetiredList(ctx, filter, sort) + xcheckf(ctx, err, "listing retired messages") + return l +} + +// HookQueueSize returns the number of webhooks still to be delivered. +func (Admin) HookQueueSize(ctx context.Context) int { + n, err := queue.HookQueueSize(ctx) + xcheckf(ctx, err, "get hook queue size") + return n +} + +// HookList lists webhooks still to be delivered. 
+func (Admin) HookList(ctx context.Context, filter queue.HookFilter, sort queue.HookSort) []queue.Hook { + l, err := queue.HookList(ctx, filter, sort) + xcheckf(ctx, err, "listing hook queue") + return l +} + +// HookNextAttemptSet sets a new time for next delivery attempt of matching +// hooks from the queue. +func (Admin) HookNextAttemptSet(ctx context.Context, filter queue.HookFilter, minutes int) (affected int) { + n, err := queue.HookNextAttemptSet(ctx, filter, time.Now().Add(time.Duration(minutes)*time.Minute)) + xcheckf(ctx, err, "setting new next delivery attempt time for matching webhooks in queue") + return n +} + +// HookNextAttemptAdd adds a duration to the time of next delivery attempt of +// matching hooks from the queue. +func (Admin) HookNextAttemptAdd(ctx context.Context, filter queue.HookFilter, minutes int) (affected int) { + n, err := queue.HookNextAttemptAdd(ctx, filter, time.Duration(minutes)*time.Minute) + xcheckf(ctx, err, "adding duration to next delivery attempt for matching webhooks in queue") + return n +} + +// HookRetiredList lists retired webhooks. +func (Admin) HookRetiredList(ctx context.Context, filter queue.HookRetiredFilter, sort queue.HookRetiredSort) []queue.HookRetired { + l, err := queue.HookRetiredList(ctx, filter, sort) + xcheckf(ctx, err, "listing retired hooks") + return l +} + +// HookCancel prevents further delivery attempts of matching webhooks. +func (Admin) HookCancel(ctx context.Context, filter queue.HookFilter) (affected int) { + log := pkglog.WithContext(ctx) + n, err := queue.HookCancel(ctx, log, filter) + xcheckf(ctx, err, "cancel hooks in queue") + return n +} + // LogLevels returns the current log levels. 
func (Admin) LogLevels(ctx context.Context) map[string]string { m := map[string]string{} diff --git a/webadmin/admin.html b/webadmin/admin.html index fc2ecda..fb00974 100644 --- a/webadmin/admin.html +++ b/webadmin/admin.html @@ -14,6 +14,7 @@ h2 { font-size: 1.1rem; } h3, h4 { font-size: 1rem; } ul { padding-left: 1rem; } .literal { background-color: #eee; padding: .5em 1em; margin: 1ex 0; border: 1px solid #eee; border-radius: 4px; white-space: pre-wrap; font-family: monospace; font-size: 15px; tab-size: 4; } +table { border-spacing: 0; } table td, table th { padding: .2em .5em; } table table td, table table th { padding: 0 0.1em; } table.long >tbody >tr >td { padding: 1em .5em; } @@ -24,10 +25,16 @@ table.hover > tbody > tr:hover { background-color: #f0f0f0; } p { margin-bottom: 1em; max-width: 50em; } [title] { text-decoration: underline; text-decoration-style: dotted; } fieldset { border: 0; } +.twocols { display: flex; gap: 2em; } +.unclutter { opacity: .5; } +.unclutter:hover { opacity: 1; } +@media (max-width:1910px) { + .twocols { display: block; gap: 2em; } +} .scriptswitch { text-decoration: underline #dca053 2px; } thead { position: sticky; top: 0; background-color: white; box-shadow: 0 1px 1px rgba(0, 0, 0, 0.1); } -#page { opacity: 1; animation: fadein 0.15s ease-in; } -#page.loading { opacity: 0.1; animation: fadeout 1s ease-out; } +#page, .loadend { opacity: 1; animation: fadein 0.15s ease-in; } +#page.loading, .loadstart { opacity: 0.1; animation: fadeout 1s ease-out; } @keyframes fadein { 0% { opacity: 0 } 100% { opacity: 1 } } @keyframes fadeout { 0% { opacity: 1 } 100% { opacity: 0.1 } } diff --git a/webadmin/admin.js b/webadmin/admin.js index b51acc2..978288a 100644 --- a/webadmin/admin.js +++ b/webadmin/admin.js @@ -220,6 +220,7 @@ const [dom, style, attr, prop] = (function () { autocomplete: (s) => _attr('autocomplete', s), list: (s) => _attr('list', s), form: (s) => _attr('form', s), + size: (s) => _attr('size', s), }; const style = (x) => { 
return { _styles: x }; }; const prop = (x) => { return { _props: x }; }; @@ -336,7 +337,7 @@ var api; SPFResult["SPFTemperror"] = "temperror"; SPFResult["SPFPermerror"] = "permerror"; })(SPFResult = api.SPFResult || (api.SPFResult = {})); - api.structTypes = { "Account": true, "AuthResults": true, "AutoconfCheckResult": true, "AutodiscoverCheckResult": true, "AutodiscoverSRV": true, "AutomaticJunkFlags": true, "CheckResult": true, "ClientConfigs": true, "ClientConfigsEntry": true, "DANECheckResult": true, "DKIMAuthResult": true, "DKIMCheckResult": true, "DKIMRecord": true, "DMARCCheckResult": true, "DMARCRecord": true, "DMARCSummary": true, "DNSSECResult": true, "DateRange": true, "Destination": true, "Directive": true, "Domain": true, "DomainFeedback": true, "Evaluation": true, "EvaluationStat": true, "Extension": true, "FailureDetails": true, "Filter": true, "HoldRule": true, "IPDomain": true, "IPRevCheckResult": true, "Identifiers": true, "JunkFilter": true, "MTASTSCheckResult": true, "MTASTSRecord": true, "MX": true, "MXCheckResult": true, "Modifier": true, "Msg": true, "Pair": true, "Policy": true, "PolicyEvaluated": true, "PolicyOverrideReason": true, "PolicyPublished": true, "PolicyRecord": true, "Record": true, "Report": true, "ReportMetadata": true, "ReportRecord": true, "Result": true, "ResultPolicy": true, "Reverse": true, "Route": true, "Row": true, "Ruleset": true, "SMTPAuth": true, "SPFAuthResult": true, "SPFCheckResult": true, "SPFRecord": true, "SRV": true, "SRVConfCheckResult": true, "STSMX": true, "SubjectPass": true, "Summary": true, "SuppressAddress": true, "TLSCheckResult": true, "TLSRPTCheckResult": true, "TLSRPTDateRange": true, "TLSRPTRecord": true, "TLSRPTSummary": true, "TLSRPTSuppressAddress": true, "TLSReportRecord": true, "TLSResult": true, "Transport": true, "TransportDirect": true, "TransportSMTP": true, "TransportSocks": true, "URI": true, "WebForward": true, "WebHandler": true, "WebRedirect": true, "WebStatic": true, 
"WebserverConfig": true }; + api.structTypes = { "Account": true, "AuthResults": true, "AutoconfCheckResult": true, "AutodiscoverCheckResult": true, "AutodiscoverSRV": true, "AutomaticJunkFlags": true, "CheckResult": true, "ClientConfigs": true, "ClientConfigsEntry": true, "DANECheckResult": true, "DKIMAuthResult": true, "DKIMCheckResult": true, "DKIMRecord": true, "DMARCCheckResult": true, "DMARCRecord": true, "DMARCSummary": true, "DNSSECResult": true, "DateRange": true, "Destination": true, "Directive": true, "Domain": true, "DomainFeedback": true, "Evaluation": true, "EvaluationStat": true, "Extension": true, "FailureDetails": true, "Filter": true, "HoldRule": true, "Hook": true, "HookFilter": true, "HookResult": true, "HookRetired": true, "HookRetiredFilter": true, "HookRetiredSort": true, "HookSort": true, "IPDomain": true, "IPRevCheckResult": true, "Identifiers": true, "IncomingWebhook": true, "JunkFilter": true, "MTASTSCheckResult": true, "MTASTSRecord": true, "MX": true, "MXCheckResult": true, "Modifier": true, "Msg": true, "MsgResult": true, "MsgRetired": true, "OutgoingWebhook": true, "Pair": true, "Policy": true, "PolicyEvaluated": true, "PolicyOverrideReason": true, "PolicyPublished": true, "PolicyRecord": true, "Record": true, "Report": true, "ReportMetadata": true, "ReportRecord": true, "Result": true, "ResultPolicy": true, "RetiredFilter": true, "RetiredSort": true, "Reverse": true, "Route": true, "Row": true, "Ruleset": true, "SMTPAuth": true, "SPFAuthResult": true, "SPFCheckResult": true, "SPFRecord": true, "SRV": true, "SRVConfCheckResult": true, "STSMX": true, "Sort": true, "SubjectPass": true, "Summary": true, "SuppressAddress": true, "TLSCheckResult": true, "TLSRPTCheckResult": true, "TLSRPTDateRange": true, "TLSRPTRecord": true, "TLSRPTSummary": true, "TLSRPTSuppressAddress": true, "TLSReportRecord": true, "TLSResult": true, "Transport": true, "TransportDirect": true, "TransportSMTP": true, "TransportSocks": true, "URI": true, "WebForward": 
true, "WebHandler": true, "WebRedirect": true, "WebStatic": true, "WebserverConfig": true }; api.stringsTypes = { "Align": true, "Alignment": true, "CSRFToken": true, "DKIMResult": true, "DMARCPolicy": true, "DMARCResult": true, "Disposition": true, "IP": true, "Localpart": true, "Mode": true, "PolicyOverride": true, "PolicyType": true, "RUA": true, "ResultType": true, "SPFDomainScope": true, "SPFResult": true }; api.intsTypes = {}; api.types = { @@ -371,7 +372,9 @@ var api; "AutoconfCheckResult": { "Name": "AutoconfCheckResult", "Docs": "", "Fields": [{ "Name": "ClientSettingsDomainIPs", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "IPs", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "Errors", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "Warnings", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "Instructions", "Docs": "", "Typewords": ["[]", "string"] }] }, "AutodiscoverCheckResult": { "Name": "AutodiscoverCheckResult", "Docs": "", "Fields": [{ "Name": "Records", "Docs": "", "Typewords": ["[]", "AutodiscoverSRV"] }, { "Name": "Errors", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "Warnings", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "Instructions", "Docs": "", "Typewords": ["[]", "string"] }] }, "AutodiscoverSRV": { "Name": "AutodiscoverSRV", "Docs": "", "Fields": [{ "Name": "Target", "Docs": "", "Typewords": ["string"] }, { "Name": "Port", "Docs": "", "Typewords": ["uint16"] }, { "Name": "Priority", "Docs": "", "Typewords": ["uint16"] }, { "Name": "Weight", "Docs": "", "Typewords": ["uint16"] }, { "Name": "IPs", "Docs": "", "Typewords": ["[]", "string"] }] }, - "Account": { "Name": "Account", "Docs": "", "Fields": [{ "Name": "Domain", "Docs": "", "Typewords": ["string"] }, { "Name": "Description", "Docs": "", "Typewords": ["string"] }, { "Name": "FullName", "Docs": "", "Typewords": ["string"] }, { "Name": "Destinations", "Docs": "", "Typewords": ["{}", "Destination"] }, { "Name": "SubjectPass", "Docs": 
"", "Typewords": ["SubjectPass"] }, { "Name": "QuotaMessageSize", "Docs": "", "Typewords": ["int64"] }, { "Name": "RejectsMailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "KeepRejects", "Docs": "", "Typewords": ["bool"] }, { "Name": "AutomaticJunkFlags", "Docs": "", "Typewords": ["AutomaticJunkFlags"] }, { "Name": "JunkFilter", "Docs": "", "Typewords": ["nullable", "JunkFilter"] }, { "Name": "MaxOutgoingMessagesPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "MaxFirstTimeRecipientsPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "NoFirstTimeSenderDelay", "Docs": "", "Typewords": ["bool"] }, { "Name": "Routes", "Docs": "", "Typewords": ["[]", "Route"] }, { "Name": "DNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, + "Account": { "Name": "Account", "Docs": "", "Fields": [{ "Name": "OutgoingWebhook", "Docs": "", "Typewords": ["nullable", "OutgoingWebhook"] }, { "Name": "IncomingWebhook", "Docs": "", "Typewords": ["nullable", "IncomingWebhook"] }, { "Name": "FromIDLoginAddresses", "Docs": "", "Typewords": ["[]", "string"] }, { "Name": "KeepRetiredMessagePeriod", "Docs": "", "Typewords": ["int64"] }, { "Name": "KeepRetiredWebhookPeriod", "Docs": "", "Typewords": ["int64"] }, { "Name": "Domain", "Docs": "", "Typewords": ["string"] }, { "Name": "Description", "Docs": "", "Typewords": ["string"] }, { "Name": "FullName", "Docs": "", "Typewords": ["string"] }, { "Name": "Destinations", "Docs": "", "Typewords": ["{}", "Destination"] }, { "Name": "SubjectPass", "Docs": "", "Typewords": ["SubjectPass"] }, { "Name": "QuotaMessageSize", "Docs": "", "Typewords": ["int64"] }, { "Name": "RejectsMailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "KeepRejects", "Docs": "", "Typewords": ["bool"] }, { "Name": "AutomaticJunkFlags", "Docs": "", "Typewords": ["AutomaticJunkFlags"] }, { "Name": "JunkFilter", "Docs": "", "Typewords": ["nullable", "JunkFilter"] }, { "Name": "MaxOutgoingMessagesPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": 
"MaxFirstTimeRecipientsPerDay", "Docs": "", "Typewords": ["int32"] }, { "Name": "NoFirstTimeSenderDelay", "Docs": "", "Typewords": ["bool"] }, { "Name": "Routes", "Docs": "", "Typewords": ["[]", "Route"] }, { "Name": "DNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, + "OutgoingWebhook": { "Name": "OutgoingWebhook", "Docs": "", "Fields": [{ "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Authorization", "Docs": "", "Typewords": ["string"] }, { "Name": "Events", "Docs": "", "Typewords": ["[]", "string"] }] }, + "IncomingWebhook": { "Name": "IncomingWebhook", "Docs": "", "Fields": [{ "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Authorization", "Docs": "", "Typewords": ["string"] }] }, "Destination": { "Name": "Destination", "Docs": "", "Fields": [{ "Name": "Mailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "Rulesets", "Docs": "", "Typewords": ["[]", "Ruleset"] }, { "Name": "FullName", "Docs": "", "Typewords": ["string"] }] }, "Ruleset": { "Name": "Ruleset", "Docs": "", "Fields": [{ "Name": "SMTPMailFromRegexp", "Docs": "", "Typewords": ["string"] }, { "Name": "VerifiedDomain", "Docs": "", "Typewords": ["string"] }, { "Name": "HeadersRegexp", "Docs": "", "Typewords": ["{}", "string"] }, { "Name": "IsForward", "Docs": "", "Typewords": ["bool"] }, { "Name": "ListAllowDomain", "Docs": "", "Typewords": ["string"] }, { "Name": "AcceptRejectsToMailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "Mailbox", "Docs": "", "Typewords": ["string"] }, { "Name": "VerifiedDNSDomain", "Docs": "", "Typewords": ["Domain"] }, { "Name": "ListAllowDNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, "SubjectPass": { "Name": "SubjectPass", "Docs": "", "Fields": [{ "Name": "Period", "Docs": "", "Typewords": ["int64"] }] }, @@ -404,9 +407,21 @@ var api; "ClientConfigs": { "Name": "ClientConfigs", "Docs": "", "Fields": [{ "Name": "Entries", "Docs": "", "Typewords": ["[]", "ClientConfigsEntry"] }] }, "ClientConfigsEntry": { "Name": 
"ClientConfigsEntry", "Docs": "", "Fields": [{ "Name": "Protocol", "Docs": "", "Typewords": ["string"] }, { "Name": "Host", "Docs": "", "Typewords": ["Domain"] }, { "Name": "Port", "Docs": "", "Typewords": ["int32"] }, { "Name": "Listener", "Docs": "", "Typewords": ["string"] }, { "Name": "Note", "Docs": "", "Typewords": ["string"] }] }, "HoldRule": { "Name": "HoldRule", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "SenderDomain", "Docs": "", "Typewords": ["Domain"] }, { "Name": "RecipientDomain", "Docs": "", "Typewords": ["Domain"] }, { "Name": "SenderDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "RecipientDomainStr", "Docs": "", "Typewords": ["string"] }] }, - "Filter": { "Name": "Filter", "Docs": "", "Fields": [{ "Name": "IDs", "Docs": "", "Typewords": ["[]", "int64"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "From", "Docs": "", "Typewords": ["string"] }, { "Name": "To", "Docs": "", "Typewords": ["string"] }, { "Name": "Hold", "Docs": "", "Typewords": ["nullable", "bool"] }, { "Name": "Submitted", "Docs": "", "Typewords": ["string"] }, { "Name": "NextAttempt", "Docs": "", "Typewords": ["string"] }, { "Name": "Transport", "Docs": "", "Typewords": ["nullable", "string"] }] }, - "Msg": { "Name": "Msg", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "BaseID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Queued", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Hold", "Docs": "", "Typewords": ["bool"] }, { "Name": "SenderAccount", "Docs": "", "Typewords": ["string"] }, { "Name": "SenderLocalpart", "Docs": "", "Typewords": ["Localpart"] }, { "Name": "SenderDomain", "Docs": "", "Typewords": ["IPDomain"] }, { "Name": "SenderDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "RecipientLocalpart", "Docs": "", "Typewords": ["Localpart"] }, { "Name": "RecipientDomain", 
"Docs": "", "Typewords": ["IPDomain"] }, { "Name": "RecipientDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "Attempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "MaxAttempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "DialedIPs", "Docs": "", "Typewords": ["{}", "[]", "IP"] }, { "Name": "NextAttempt", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "LastAttempt", "Docs": "", "Typewords": ["nullable", "timestamp"] }, { "Name": "LastError", "Docs": "", "Typewords": ["string"] }, { "Name": "Has8bit", "Docs": "", "Typewords": ["bool"] }, { "Name": "SMTPUTF8", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsDMARCReport", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsTLSReport", "Docs": "", "Typewords": ["bool"] }, { "Name": "Size", "Docs": "", "Typewords": ["int64"] }, { "Name": "MessageID", "Docs": "", "Typewords": ["string"] }, { "Name": "MsgPrefix", "Docs": "", "Typewords": ["nullable", "string"] }, { "Name": "DSNUTF8", "Docs": "", "Typewords": ["nullable", "string"] }, { "Name": "Transport", "Docs": "", "Typewords": ["string"] }, { "Name": "RequireTLS", "Docs": "", "Typewords": ["nullable", "bool"] }, { "Name": "FutureReleaseRequest", "Docs": "", "Typewords": ["string"] }] }, + "Filter": { "Name": "Filter", "Docs": "", "Fields": [{ "Name": "Max", "Docs": "", "Typewords": ["int32"] }, { "Name": "IDs", "Docs": "", "Typewords": ["[]", "int64"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "From", "Docs": "", "Typewords": ["string"] }, { "Name": "To", "Docs": "", "Typewords": ["string"] }, { "Name": "Hold", "Docs": "", "Typewords": ["nullable", "bool"] }, { "Name": "Submitted", "Docs": "", "Typewords": ["string"] }, { "Name": "NextAttempt", "Docs": "", "Typewords": ["string"] }, { "Name": "Transport", "Docs": "", "Typewords": ["nullable", "string"] }] }, + "Sort": { "Name": "Sort", "Docs": "", "Fields": [{ "Name": "Field", "Docs": "", "Typewords": ["string"] }, { "Name": "LastID", "Docs": "", "Typewords": 
["int64"] }, { "Name": "Last", "Docs": "", "Typewords": ["any"] }, { "Name": "Asc", "Docs": "", "Typewords": ["bool"] }] }, + "Msg": { "Name": "Msg", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "BaseID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Queued", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Hold", "Docs": "", "Typewords": ["bool"] }, { "Name": "SenderAccount", "Docs": "", "Typewords": ["string"] }, { "Name": "SenderLocalpart", "Docs": "", "Typewords": ["Localpart"] }, { "Name": "SenderDomain", "Docs": "", "Typewords": ["IPDomain"] }, { "Name": "SenderDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "FromID", "Docs": "", "Typewords": ["string"] }, { "Name": "RecipientLocalpart", "Docs": "", "Typewords": ["Localpart"] }, { "Name": "RecipientDomain", "Docs": "", "Typewords": ["IPDomain"] }, { "Name": "RecipientDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "Attempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "MaxAttempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "DialedIPs", "Docs": "", "Typewords": ["{}", "[]", "IP"] }, { "Name": "NextAttempt", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "LastAttempt", "Docs": "", "Typewords": ["nullable", "timestamp"] }, { "Name": "Results", "Docs": "", "Typewords": ["[]", "MsgResult"] }, { "Name": "Has8bit", "Docs": "", "Typewords": ["bool"] }, { "Name": "SMTPUTF8", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsDMARCReport", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsTLSReport", "Docs": "", "Typewords": ["bool"] }, { "Name": "Size", "Docs": "", "Typewords": ["int64"] }, { "Name": "MessageID", "Docs": "", "Typewords": ["string"] }, { "Name": "MsgPrefix", "Docs": "", "Typewords": ["nullable", "string"] }, { "Name": "Subject", "Docs": "", "Typewords": ["string"] }, { "Name": "DSNUTF8", "Docs": "", "Typewords": ["nullable", "string"] }, { "Name": "Transport", "Docs": "", "Typewords": ["string"] }, { "Name": 
"RequireTLS", "Docs": "", "Typewords": ["nullable", "bool"] }, { "Name": "FutureReleaseRequest", "Docs": "", "Typewords": ["string"] }, { "Name": "Extra", "Docs": "", "Typewords": ["{}", "string"] }] }, "IPDomain": { "Name": "IPDomain", "Docs": "", "Fields": [{ "Name": "IP", "Docs": "", "Typewords": ["IP"] }, { "Name": "Domain", "Docs": "", "Typewords": ["Domain"] }] }, + "MsgResult": { "Name": "MsgResult", "Docs": "", "Fields": [{ "Name": "Start", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Duration", "Docs": "", "Typewords": ["int64"] }, { "Name": "Success", "Docs": "", "Typewords": ["bool"] }, { "Name": "Code", "Docs": "", "Typewords": ["int32"] }, { "Name": "Secode", "Docs": "", "Typewords": ["string"] }, { "Name": "Error", "Docs": "", "Typewords": ["string"] }] }, + "RetiredFilter": { "Name": "RetiredFilter", "Docs": "", "Fields": [{ "Name": "Max", "Docs": "", "Typewords": ["int32"] }, { "Name": "IDs", "Docs": "", "Typewords": ["[]", "int64"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "From", "Docs": "", "Typewords": ["string"] }, { "Name": "To", "Docs": "", "Typewords": ["string"] }, { "Name": "Submitted", "Docs": "", "Typewords": ["string"] }, { "Name": "LastActivity", "Docs": "", "Typewords": ["string"] }, { "Name": "Transport", "Docs": "", "Typewords": ["nullable", "string"] }, { "Name": "Success", "Docs": "", "Typewords": ["nullable", "bool"] }] }, + "RetiredSort": { "Name": "RetiredSort", "Docs": "", "Fields": [{ "Name": "Field", "Docs": "", "Typewords": ["string"] }, { "Name": "LastID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Last", "Docs": "", "Typewords": ["any"] }, { "Name": "Asc", "Docs": "", "Typewords": ["bool"] }] }, + "MsgRetired": { "Name": "MsgRetired", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "BaseID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Queued", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "SenderAccount", "Docs": "", 
"Typewords": ["string"] }, { "Name": "SenderLocalpart", "Docs": "", "Typewords": ["Localpart"] }, { "Name": "SenderDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "FromID", "Docs": "", "Typewords": ["string"] }, { "Name": "RecipientLocalpart", "Docs": "", "Typewords": ["Localpart"] }, { "Name": "RecipientDomain", "Docs": "", "Typewords": ["IPDomain"] }, { "Name": "RecipientDomainStr", "Docs": "", "Typewords": ["string"] }, { "Name": "Attempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "MaxAttempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "DialedIPs", "Docs": "", "Typewords": ["{}", "[]", "IP"] }, { "Name": "LastAttempt", "Docs": "", "Typewords": ["nullable", "timestamp"] }, { "Name": "Results", "Docs": "", "Typewords": ["[]", "MsgResult"] }, { "Name": "Has8bit", "Docs": "", "Typewords": ["bool"] }, { "Name": "SMTPUTF8", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsDMARCReport", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsTLSReport", "Docs": "", "Typewords": ["bool"] }, { "Name": "Size", "Docs": "", "Typewords": ["int64"] }, { "Name": "MessageID", "Docs": "", "Typewords": ["string"] }, { "Name": "Subject", "Docs": "", "Typewords": ["string"] }, { "Name": "Transport", "Docs": "", "Typewords": ["string"] }, { "Name": "RequireTLS", "Docs": "", "Typewords": ["nullable", "bool"] }, { "Name": "FutureReleaseRequest", "Docs": "", "Typewords": ["string"] }, { "Name": "Extra", "Docs": "", "Typewords": ["{}", "string"] }, { "Name": "LastActivity", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "RecipientAddress", "Docs": "", "Typewords": ["string"] }, { "Name": "Success", "Docs": "", "Typewords": ["bool"] }, { "Name": "KeepUntil", "Docs": "", "Typewords": ["timestamp"] }] }, + "HookFilter": { "Name": "HookFilter", "Docs": "", "Fields": [{ "Name": "Max", "Docs": "", "Typewords": ["int32"] }, { "Name": "IDs", "Docs": "", "Typewords": ["[]", "int64"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "Submitted", 
"Docs": "", "Typewords": ["string"] }, { "Name": "NextAttempt", "Docs": "", "Typewords": ["string"] }, { "Name": "Event", "Docs": "", "Typewords": ["string"] }] }, + "HookSort": { "Name": "HookSort", "Docs": "", "Fields": [{ "Name": "Field", "Docs": "", "Typewords": ["string"] }, { "Name": "LastID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Last", "Docs": "", "Typewords": ["any"] }, { "Name": "Asc", "Docs": "", "Typewords": ["bool"] }] }, + "Hook": { "Name": "Hook", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "QueueMsgID", "Docs": "", "Typewords": ["int64"] }, { "Name": "FromID", "Docs": "", "Typewords": ["string"] }, { "Name": "MessageID", "Docs": "", "Typewords": ["string"] }, { "Name": "Subject", "Docs": "", "Typewords": ["string"] }, { "Name": "Extra", "Docs": "", "Typewords": ["{}", "string"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Authorization", "Docs": "", "Typewords": ["string"] }, { "Name": "IsIncoming", "Docs": "", "Typewords": ["bool"] }, { "Name": "OutgoingEvent", "Docs": "", "Typewords": ["string"] }, { "Name": "Payload", "Docs": "", "Typewords": ["string"] }, { "Name": "Submitted", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Attempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "NextAttempt", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Results", "Docs": "", "Typewords": ["[]", "HookResult"] }] }, + "HookResult": { "Name": "HookResult", "Docs": "", "Fields": [{ "Name": "Start", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "Duration", "Docs": "", "Typewords": ["int64"] }, { "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Success", "Docs": "", "Typewords": ["bool"] }, { "Name": "Code", "Docs": "", "Typewords": ["int32"] }, { "Name": "Error", "Docs": "", "Typewords": ["string"] }, { "Name": "Response", "Docs": "", "Typewords": ["string"] }] }, + "HookRetiredFilter": { 
"Name": "HookRetiredFilter", "Docs": "", "Fields": [{ "Name": "Max", "Docs": "", "Typewords": ["int32"] }, { "Name": "IDs", "Docs": "", "Typewords": ["[]", "int64"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "Submitted", "Docs": "", "Typewords": ["string"] }, { "Name": "LastActivity", "Docs": "", "Typewords": ["string"] }, { "Name": "Event", "Docs": "", "Typewords": ["string"] }] }, + "HookRetiredSort": { "Name": "HookRetiredSort", "Docs": "", "Fields": [{ "Name": "Field", "Docs": "", "Typewords": ["string"] }, { "Name": "LastID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Last", "Docs": "", "Typewords": ["any"] }, { "Name": "Asc", "Docs": "", "Typewords": ["bool"] }] }, + "HookRetired": { "Name": "HookRetired", "Docs": "", "Fields": [{ "Name": "ID", "Docs": "", "Typewords": ["int64"] }, { "Name": "QueueMsgID", "Docs": "", "Typewords": ["int64"] }, { "Name": "FromID", "Docs": "", "Typewords": ["string"] }, { "Name": "MessageID", "Docs": "", "Typewords": ["string"] }, { "Name": "Subject", "Docs": "", "Typewords": ["string"] }, { "Name": "Extra", "Docs": "", "Typewords": ["{}", "string"] }, { "Name": "Account", "Docs": "", "Typewords": ["string"] }, { "Name": "URL", "Docs": "", "Typewords": ["string"] }, { "Name": "Authorization", "Docs": "", "Typewords": ["bool"] }, { "Name": "IsIncoming", "Docs": "", "Typewords": ["bool"] }, { "Name": "OutgoingEvent", "Docs": "", "Typewords": ["string"] }, { "Name": "Payload", "Docs": "", "Typewords": ["string"] }, { "Name": "Submitted", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "SupersededByID", "Docs": "", "Typewords": ["int64"] }, { "Name": "Attempts", "Docs": "", "Typewords": ["int32"] }, { "Name": "Results", "Docs": "", "Typewords": ["[]", "HookResult"] }, { "Name": "Success", "Docs": "", "Typewords": ["bool"] }, { "Name": "LastActivity", "Docs": "", "Typewords": ["timestamp"] }, { "Name": "KeepUntil", "Docs": "", "Typewords": ["timestamp"] }] }, "WebserverConfig": { "Name": 
"WebserverConfig", "Docs": "", "Fields": [{ "Name": "WebDNSDomainRedirects", "Docs": "", "Typewords": ["[]", "[]", "Domain"] }, { "Name": "WebDomainRedirects", "Docs": "", "Typewords": ["[]", "[]", "string"] }, { "Name": "WebHandlers", "Docs": "", "Typewords": ["[]", "WebHandler"] }] }, "WebHandler": { "Name": "WebHandler", "Docs": "", "Fields": [{ "Name": "LogName", "Docs": "", "Typewords": ["string"] }, { "Name": "Domain", "Docs": "", "Typewords": ["string"] }, { "Name": "PathRegexp", "Docs": "", "Typewords": ["string"] }, { "Name": "DontRedirectPlainHTTP", "Docs": "", "Typewords": ["bool"] }, { "Name": "Compress", "Docs": "", "Typewords": ["bool"] }, { "Name": "WebStatic", "Docs": "", "Typewords": ["nullable", "WebStatic"] }, { "Name": "WebRedirect", "Docs": "", "Typewords": ["nullable", "WebRedirect"] }, { "Name": "WebForward", "Docs": "", "Typewords": ["nullable", "WebForward"] }, { "Name": "Name", "Docs": "", "Typewords": ["string"] }, { "Name": "DNSDomain", "Docs": "", "Typewords": ["Domain"] }] }, "WebStatic": { "Name": "WebStatic", "Docs": "", "Fields": [{ "Name": "StripPrefix", "Docs": "", "Typewords": ["string"] }, { "Name": "Root", "Docs": "", "Typewords": ["string"] }, { "Name": "ListFiles", "Docs": "", "Typewords": ["bool"] }, { "Name": "ContinueNotFound", "Docs": "", "Typewords": ["bool"] }, { "Name": "ResponseHeaders", "Docs": "", "Typewords": ["{}", "string"] }] }, @@ -472,6 +487,8 @@ var api; AutodiscoverCheckResult: (v) => api.parse("AutodiscoverCheckResult", v), AutodiscoverSRV: (v) => api.parse("AutodiscoverSRV", v), Account: (v) => api.parse("Account", v), + OutgoingWebhook: (v) => api.parse("OutgoingWebhook", v), + IncomingWebhook: (v) => api.parse("IncomingWebhook", v), Destination: (v) => api.parse("Destination", v), Ruleset: (v) => api.parse("Ruleset", v), SubjectPass: (v) => api.parse("SubjectPass", v), @@ -505,8 +522,20 @@ var api; ClientConfigsEntry: (v) => api.parse("ClientConfigsEntry", v), HoldRule: (v) => api.parse("HoldRule", v), 
Filter: (v) => api.parse("Filter", v), + Sort: (v) => api.parse("Sort", v), Msg: (v) => api.parse("Msg", v), IPDomain: (v) => api.parse("IPDomain", v), + MsgResult: (v) => api.parse("MsgResult", v), + RetiredFilter: (v) => api.parse("RetiredFilter", v), + RetiredSort: (v) => api.parse("RetiredSort", v), + MsgRetired: (v) => api.parse("MsgRetired", v), + HookFilter: (v) => api.parse("HookFilter", v), + HookSort: (v) => api.parse("HookSort", v), + Hook: (v) => api.parse("Hook", v), + HookResult: (v) => api.parse("HookResult", v), + HookRetiredFilter: (v) => api.parse("HookRetiredFilter", v), + HookRetiredSort: (v) => api.parse("HookRetiredSort", v), + HookRetired: (v) => api.parse("HookRetired", v), WebserverConfig: (v) => api.parse("WebserverConfig", v), WebHandler: (v) => api.parse("WebHandler", v), WebStatic: (v) => api.parse("WebStatic", v), @@ -868,11 +897,11 @@ var api; return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); } // QueueList returns the messages currently in the outgoing queue. - async QueueList(filter) { + async QueueList(filter, sort) { const fn = "QueueList"; - const paramTypes = [["Filter"]]; + const paramTypes = [["Filter"], ["Sort"]]; const returnTypes = [["[]", "Msg"]]; - const params = [filter]; + const params = [filter, sort]; return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); } // QueueNextAttemptSet sets a new time for next delivery attempt of matching @@ -935,6 +964,65 @@ var api; const params = [filter, transport]; return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); } + // RetiredList returns messages retired from the queue (delivery could + // have succeeded or failed). 
+ async RetiredList(filter, sort) { + const fn = "RetiredList"; + const paramTypes = [["RetiredFilter"], ["RetiredSort"]]; + const returnTypes = [["[]", "MsgRetired"]]; + const params = [filter, sort]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // HookQueueSize returns the number of webhooks still to be delivered. + async HookQueueSize() { + const fn = "HookQueueSize"; + const paramTypes = []; + const returnTypes = [["int32"]]; + const params = []; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // HookList lists webhooks still to be delivered. + async HookList(filter, sort) { + const fn = "HookList"; + const paramTypes = [["HookFilter"], ["HookSort"]]; + const returnTypes = [["[]", "Hook"]]; + const params = [filter, sort]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // HookNextAttemptSet sets a new time for next delivery attempt of matching + // hooks from the queue. + async HookNextAttemptSet(filter, minutes) { + const fn = "HookNextAttemptSet"; + const paramTypes = [["HookFilter"], ["int32"]]; + const returnTypes = [["int32"]]; + const params = [filter, minutes]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // HookNextAttemptAdd adds a duration to the time of next delivery attempt of + // matching hooks from the queue. + async HookNextAttemptAdd(filter, minutes) { + const fn = "HookNextAttemptAdd"; + const paramTypes = [["HookFilter"], ["int32"]]; + const returnTypes = [["int32"]]; + const params = [filter, minutes]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // HookRetiredList lists retired webhooks. 
+ async HookRetiredList(filter, sort) { + const fn = "HookRetiredList"; + const paramTypes = [["HookRetiredFilter"], ["HookRetiredSort"]]; + const returnTypes = [["[]", "HookRetired"]]; + const params = [filter, sort]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } + // HookCancel prevents further delivery attempts of matching webhooks. + async HookCancel(filter) { + const fn = "HookCancel"; + const paramTypes = [["HookFilter"]]; + const returnTypes = [["int32"]]; + const params = [filter]; + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params); + } // LogLevels returns the current log levels. async LogLevels() { const fn = "LogLevels"; @@ -1516,6 +1604,37 @@ const login = async (reason) => { password.focus(); }); }; +// Popup shows kids in a centered div with white background on top of a +// transparent overlay on top of the window. Clicking the overlay or hitting +// Escape closes the popup. Scrollbars are automatically added to the div with +// kids. Returns a function that removes the popup. 
+const popup = (...kids) => { + const origFocus = document.activeElement; + const close = () => { + if (!root.parentNode) { + return; + } + root.remove(); + if (origFocus && origFocus instanceof HTMLElement && origFocus.parentNode) { + origFocus.focus(); + } + }; + let content; + const root = dom.div(style({ position: 'fixed', top: 0, right: 0, bottom: 0, left: 0, backgroundColor: 'rgba(0, 0, 0, 0.1)', display: 'flex', alignItems: 'center', justifyContent: 'center', zIndex: '1' }), function keydown(e) { + if (e.key === 'Escape') { + e.stopPropagation(); + close(); + } + }, function click(e) { + e.stopPropagation(); + close(); + }, content = dom.div(attr.tabindex('0'), style({ backgroundColor: 'white', borderRadius: '.25em', padding: '1em', boxShadow: '0 0 20px rgba(0, 0, 0, 0.1)', border: '1px solid #ddd', maxWidth: '95vw', overflowX: 'auto', maxHeight: '95vh', overflowY: 'auto' }), function click(e) { + e.stopPropagation(); + }, kids)); + document.body.appendChild(root); + content.focus(); + return close; +}; const localStorageGet = (k) => { try { return window.localStorage.getItem(k); @@ -1709,9 +1828,10 @@ const formatSize = (n) => { return n + ' bytes'; }; const index = async () => { - const [domains, queueSize, checkUpdatesEnabled, accounts] = await Promise.all([ + const [domains, queueSize, hooksQueueSize, checkUpdatesEnabled, accounts] = await Promise.all([ client.Domains(), client.QueueSize(), + client.HookQueueSize(), client.CheckUpdatesEnabled(), client.Accounts(), ]); @@ -1722,7 +1842,7 @@ const index = async () => { let recvIDFieldset; let recvID; let cidElem; - dom._kids(page, crumbs('Mox Admin'), checkUpdatesEnabled ? 
[] : dom.p(box(yellow, 'Warning: Checking for updates has not been enabled in mox.conf (CheckUpdates: true).', dom.br(), 'Make sure you stay up to date through another mechanism!', dom.br(), 'You have a responsibility to keep the internet-connected software you run up to date and secure!', dom.br(), 'See ', link('https://updates.xmox.nl/changelog'))), dom.p(dom.a('Accounts', attr.href('#accounts')), dom.br(), dom.a('Queue', attr.href('#queue')), ' (' + queueSize + ')', dom.br()), dom.h2('Domains'), (domains || []).length === 0 ? box(red, 'No domains') : + dom._kids(page, crumbs('Mox Admin'), checkUpdatesEnabled ? [] : dom.p(box(yellow, 'Warning: Checking for updates has not been enabled in mox.conf (CheckUpdates: true).', dom.br(), 'Make sure you stay up to date through another mechanism!', dom.br(), 'You have a responsibility to keep the internet-connected software you run up to date and secure!', dom.br(), 'See ', link('https://updates.xmox.nl/changelog'))), dom.p(dom.a('Accounts', attr.href('#accounts')), dom.br(), dom.a('Queue', attr.href('#queue')), ' (' + queueSize + ')', dom.br(), dom.a('Webhook queue', attr.href('#webhookqueue')), ' (' + hooksQueueSize + ')', dom.br()), dom.h2('Domains'), (domains || []).length === 0 ? box(red, 'No domains') : dom.ul((domains || []).map(d => dom.li(dom.a(attr.href('#domains/' + domainName(d)), domainString(d))))), dom.br(), dom.h2('Add domain'), dom.form(async function submit(e) { e.preventDefault(); e.stopPropagation(); @@ -1830,6 +1950,7 @@ const account = async (name) => { client.Account(name), client.Domains(), ]); + // todo: show suppression list, and buttons to add/remove entries. 
let form; let fieldset; let localpart; @@ -2096,7 +2217,7 @@ const dmarcEvaluations = async () => { let until; let comment; const nextmonth = new Date(new Date().getTime() + 31 * 24 * 3600 * 1000); - dom._kids(page, crumbs(crumblink('Mox Admin', '#'), crumblink('DMARC', '#dmarc'), 'Evaluations'), dom.p('Incoming messages are checked against the DMARC policy of the domain in the message From header. If the policy requests reporting on the resulting evaluations, they are stored in the database. Each interval of 1 to 24 hours, the evaluations may be sent to a reporting address specified in the domain\'s DMARC policy. Not all evaluations are a reason to send a report, but if a report is sent all evaluations are included.'), dom.table(dom._class('hover'), dom.thead(dom.tr(dom.th('Domain', attr.title('Domain in the message From header. Keep in mind these can be forged, so this does not necessarily mean someone from this domain authentically tried delivering email.')), dom.th('Dispositions', attr.title('Unique dispositions occurring in report.')), dom.th('Evaluations', attr.title('Total number of message delivery attempts, including retries.')), dom.th('Send report', attr.title('Whether the current evaluations will cause a report to be sent.')))), dom.tbody(Object.entries(evalStats).sort((a, b) => a[0] < b[0] ? -1 : 1).map(t => dom.tr(dom.td(dom.a(attr.href('#dmarc/evaluations/' + domainName(t[1].Domain)), domainString(t[1].Domain))), dom.td((t[1].Dispositions || []).join(' ')), dom.td(style({ textAlign: 'right' }), '' + t[1].Count), dom.td(style({ textAlign: 'right' }), t[1].SendReport ? '✓' : ''))), isEmpty(evalStats) ? dom.tr(dom.td(attr.colspan('3'), 'No evaluations.')) : [])), dom.br(), dom.br(), dom.h2('Suppressed reporting addresses'), dom.p('In practice, sending a DMARC report to a reporting address can cause DSN to be sent back. 
Such addresses can be added to a supression list for a period, to reduce noise in the postmaster mailbox.'), dom.form(async function submit(e) { + dom._kids(page, crumbs(crumblink('Mox Admin', '#'), crumblink('DMARC', '#dmarc'), 'Evaluations'), dom.p('Incoming messages are checked against the DMARC policy of the domain in the message From header. If the policy requests reporting on the resulting evaluations, they are stored in the database. Each interval of 1 to 24 hours, the evaluations may be sent to a reporting address specified in the domain\'s DMARC policy. Not all evaluations are a reason to send a report, but if a report is sent all evaluations are included.'), dom.table(dom._class('hover'), dom.thead(dom.tr(dom.th('Domain', attr.title('Domain in the message From header. Keep in mind these can be forged, so this does not necessarily mean someone from this domain authentically tried delivering email.')), dom.th('Dispositions', attr.title('Unique dispositions occurring in report.')), dom.th('Evaluations', attr.title('Total number of message delivery attempts, including retries.')), dom.th('Send report', attr.title('Whether the current evaluations will cause a report to be sent.')))), dom.tbody(Object.entries(evalStats).sort((a, b) => a[0] < b[0] ? -1 : 1).map(t => dom.tr(dom.td(dom.a(attr.href('#dmarc/evaluations/' + domainName(t[1].Domain)), domainString(t[1].Domain))), dom.td((t[1].Dispositions || []).join(' ')), dom.td(style({ textAlign: 'right' }), '' + t[1].Count), dom.td(style({ textAlign: 'right' }), t[1].SendReport ? '✓' : ''))), isEmpty(evalStats) ? dom.tr(dom.td(attr.colspan('4'), 'No evaluations.')) : [])), dom.br(), dom.br(), dom.h2('Suppressed reporting addresses'), dom.p('In practice, sending a DMARC report to a reporting address can cause a DSN to be sent back. 
Such addresses can be added to a suppression list for a period, to reduce noise in the postmaster mailbox.'), dom.form(async function submit(e) { e.stopPropagation(); e.preventDefault(); await check(fieldset, client.DMARCSuppressAdd(reportingAddress.value, new Date(until.value), comment.value)); @@ -2538,21 +2659,26 @@ const dnsbl = async () => { }, fieldset = dom.fieldset(dom.div('One per line'), dom.div(style({ marginBottom: '.5ex' }), monitorTextarea = dom.textarea(style({ width: '20rem' }), attr.rows('' + Math.max(5, 1 + (monitorZones || []).length)), new String((monitorZones || []).map(zone => domainName(zone)).join('\n'))), dom.div('Examples: sbl.spamhaus.org or bl.spamcop.net')), dom.div(dom.submitbutton('Save'))))); }; const queueList = async () => { - let [holdRules, msgs, transports] = await Promise.all([ + let filter = { Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', From: '', To: '', Hold: null, Submitted: '', NextAttempt: '', Transport: null }; + let sort = { Field: "NextAttempt", LastID: 0, Last: null, Asc: true }; + let [holdRules, msgs0, transports] = await Promise.all([ client.QueueHoldRuleList(), - client.QueueList({ IDs: [], Account: '', From: '', To: '', Hold: null, Submitted: '', NextAttempt: '', Transport: null }), + client.QueueList(filter, sort), client.Transports(), ]); - // todo: sorting by address/timestamps/attempts. + let msgs = msgs0 || []; + // todo: more sorting // todo: after making changes, don't reload entire page. probably best to fetch messages by id and rerender. also report on which messages weren't affected (e.g. no longer in queue). // todo: display which transport will be used for a message according to routing rules (in case none is explicitly configured). // todo: live updates with SSE connections // todo: keep updating times/age. + // todo: reuse this code in webaccount to show users their own message queue, and give (more limited) options to fail/reschedule deliveries. 
const nowSecs = new Date().getTime() / 1000; let holdRuleAccount; let holdRuleSenderDomain; let holdRuleRecipientDomain; let holdRuleSubmit; + let sortElem; let filterForm; let filterAccount; let filterFrom; @@ -2571,6 +2697,7 @@ const queueList = async () => { // syntax when calling this as parameter in api client calls below. const gatherIDs = () => { const f = { + Max: 0, IDs: Array.from(toggles.entries()).filter(t => t[1].checked).map(t => t[0]), Account: '', From: '', @@ -2586,17 +2713,25 @@ const queueList = async () => { } return f; }; - const tbody = dom.tbody(); + const popupDetails = (m) => { + const nowSecs = new Date().getTime() / 1000; + popup(dom.h1('Details'), dom.table(dom.tr(dom.td('Message subject'), dom.td(m.Subject))), dom.br(), dom.h2('Results'), dom.table(dom.thead(dom.tr(dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Secode'), dom.th('Error'))), dom.tbody((m.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('6'), 'No results.')) : [], (m.Results || []).map(r => dom.tr(dom.td(age(r.Start, false, nowSecs)), dom.td(Math.round(r.Duration / 1000000) + 'ms'), dom.td(r.Success ? '✓' : ''), dom.td('' + (r.Code || '')), dom.td(r.Secode), dom.td(r.Error)))))); + }; + let tbody = dom.tbody(); const render = () => { toggles = new Map(); - for (const m of (msgs || [])) { - toggles.set(m.ID, dom.input(attr.type('checkbox'), attr.checked(''))); + for (const m of msgs) { + toggles.set(m.ID, dom.input(attr.type('checkbox'), msgs.length === 1 ? attr.checked('') : [])); } - dom._kids(tbody, (msgs || []).length === 0 ? dom.tr(dom.td(attr.colspan('14'), 'No messages.')) : [], (msgs || []).map(m => { + const ntbody = dom.tbody(dom._class('loadend'), msgs.length === 0 ? dom.tr(dom.td(attr.colspan('15'), 'No messages.')) : [], msgs.map(m => { return dom.tr(dom.td(toggles.get(m.ID)), dom.td('' + m.ID + (m.BaseID > 0 ? 
'/' + m.BaseID : '')), dom.td(age(new Date(m.Queued), false, nowSecs)), dom.td(m.SenderAccount || '-'), dom.td(m.SenderLocalpart + "@" + ipdomainString(m.SenderDomain)), // todo: escaping of localpart dom.td(m.RecipientLocalpart + "@" + ipdomainString(m.RecipientDomain)), // todo: escaping of localpart - dom.td(formatSize(m.Size)), dom.td('' + m.Attempts), dom.td(m.Hold ? 'Hold' : ''), dom.td(age(new Date(m.NextAttempt), true, nowSecs)), dom.td(m.LastAttempt ? age(new Date(m.LastAttempt), false, nowSecs) : '-'), dom.td(m.LastError || '-'), dom.td(m.RequireTLS === true ? 'Yes' : (m.RequireTLS === false ? 'No' : 'Default')), dom.td(m.Transport || '(default)')); + dom.td(formatSize(m.Size)), dom.td('' + m.Attempts), dom.td(m.Hold ? 'Hold' : ''), dom.td(age(new Date(m.NextAttempt), true, nowSecs)), dom.td(m.LastAttempt ? age(new Date(m.LastAttempt), false, nowSecs) : '-'), dom.td(m.Results && m.Results.length > 0 ? m.Results[m.Results.length - 1].Error : []), dom.td(m.Transport || '(default)'), dom.td(m.RequireTLS === true ? 'Yes' : (m.RequireTLS === false ? 
'No' : '')), dom.td(dom.clickbutton('Details', function click() { + popupDetails(m); + }))); })); + tbody.replaceWith(ntbody); + tbody = ntbody; }; render(); const buttonNextAttemptSet = (text, minutes) => dom.clickbutton(text, async function click(e) { @@ -2610,7 +2745,7 @@ const queueList = async () => { window.alert('' + n + ' message(s) updated'); window.location.reload(); // todo: reload less }); - dom._kids(page, crumbs(crumblink('Mox Admin', '#'), 'Queue'), dom.h2('Hold rules', attr.title('Messages submitted to the queue that match a hold rule are automatically marked as "on hold", preventing delivery until explicitly taken off hold again.')), dom.form(attr.id('holdRuleForm'), async function submit(e) { + dom._kids(page, crumbs(crumblink('Mox Admin', '#'), 'Queue'), dom.p(dom.a(attr.href('#queue/retired'), 'Retired messages')), dom.h2('Hold rules', attr.title('Messages submitted to the queue that match a hold rule are automatically marked as "on hold", preventing delivery until explicitly taken off hold again.')), dom.form(attr.id('holdRuleForm'), async function submit(e) { e.preventDefault(); e.stopPropagation(); const pr = { @@ -2654,7 +2789,8 @@ const queueList = async () => { async function submit(e) { e.preventDefault(); e.stopPropagation(); - const filter = { + filter = { + Max: filter.Max, IDs: [], Account: filterAccount.value, From: filterFrom.value, @@ -2664,24 +2800,54 @@ const queueList = async () => { NextAttempt: filterNextAttempt.value, Transport: !filterTransport.value ? null : (filterTransport.value === '(default)' ? '' : filterTransport.value), }; - dom._kids(tbody); - msgs = await check({ disabled: false }, client.QueueList(filter)); + sort = { + Field: sortElem.value.startsWith('nextattempt') ? 
'NextAttempt' : 'Queued', + LastID: 0, + Last: null, + Asc: sortElem.value.endsWith('asc'), + }; + tbody.classList.add('loadstart'); + msgs = await check({ disabled: false }, client.QueueList(filter, sort)) || []; render(); - }), dom.h2('Messages'), dom.table(dom._class('hover'), dom.thead(dom.tr(dom.th(), dom.th('ID'), dom.th('Submitted'), dom.th('Account'), dom.th('From'), dom.th('To'), dom.th('Size'), dom.th('Attempts'), dom.th('Hold'), dom.th('Next attempt'), dom.th('Last attempt'), dom.th('Last error'), dom.th('Require TLS'), dom.th('Transport'), dom.th()), dom.tr(dom.td(dom.input(attr.type('checkbox'), attr.checked(''), attr.form('queuefilter'), function change(e) { + }), dom.h2('Messages'), dom.table(dom._class('hover'), style({ width: '100%' }), dom.thead(dom.tr(dom.td(attr.colspan('2'), 'Filter'), dom.td(filterSubmitted = dom.input(attr.form('queuefilter'), style({ width: '7em' }), attr.title('Example: "<-1h" for filtering messages submitted more than 1 hour ago.'))), dom.td(filterAccount = dom.input(attr.form('queuefilter'))), dom.td(filterFrom = dom.input(attr.form('queuefilter')), attr.title('Example: "@sender.example" to filter by domain of sender.')), dom.td(filterTo = dom.input(attr.form('queuefilter')), attr.title('Example: "@recipient.example" to filter by domain of recipient.')), dom.td(), // todo: add filter by size? + dom.td(), // todo: add filter by attempts? + dom.td(filterHold = dom.select(attr.form('queuefilter'), function change() { + filterForm.requestSubmit(); + }, dom.option('', attr.value('')), dom.option('Yes'), dom.option('No'))), dom.td(filterNextAttempt = dom.input(attr.form('queuefilter'), style({ width: '7em' }), attr.title('Example: ">1h" for filtering messages to be delivered in more than 1 hour, or " dom.option(t)))), dom.td(attr.colspan('2'), style({ textAlign: 'right' }), // Less content shifting while rendering. 
+ 'Sort ', sortElem = dom.select(attr.form('queuefilter'), function change() { + filterForm.requestSubmit(); + }, dom.option('Next attempt ↑', attr.value('nextattempt-asc')), dom.option('Next attempt ↓', attr.value('nextattempt-desc')), dom.option('Submitted ↑', attr.value('submitted-asc')), dom.option('Submitted ↓', attr.value('submitted-desc'))), ' ', dom.submitbutton('Apply', attr.form('queuefilter')), ' ', dom.clickbutton('Reset', attr.form('queuefilter'), function click() { + filterForm.reset(); + filterForm.requestSubmit(); + }))), dom.tr(dom.td(dom.input(attr.type('checkbox'), msgs.length === 1 ? attr.checked('') : [], attr.form('queuefilter'), function change(e) { const elem = e.target; for (const [_, toggle] of toggles) { toggle.checked = elem.checked; } - })), dom.td(), dom.td(filterSubmitted = dom.input(attr.form('queuefilter'), style({ width: '7em' }), attr.title('Example: "<1h" for filtering messages submitted more than 1 minute ago.'))), dom.td(filterAccount = dom.input(attr.form('queuefilter'))), dom.td(filterFrom = dom.input(attr.form('queuefilter')), attr.title('Example: "@sender.example" to filter by domain of sender.')), dom.td(filterTo = dom.input(attr.form('queuefilter')), attr.title('Example: "@recipient.example" to filter by domain of recipient.')), dom.td(), // todo: add filter by size? - dom.td(), // todo: add filter by attempts? 
- dom.td(filterHold = dom.select(attr.form('queuefilter'), dom.option('', attr.value('')), dom.option('Yes'), dom.option('No'), function change() { - filterForm.requestSubmit(); - })), dom.td(filterNextAttempt = dom.input(attr.form('queuefilter'), style({ width: '7em' }), attr.title('Example: ">1h" for filtering messages to be delivered in more than 1 hour, or " dom.option(t)))), dom.td(dom.submitbutton('Filter', attr.form('queuefilter')), ' ', dom.clickbutton('Reset', attr.form('queuefilter'), function click() { - filterForm.reset(); - filterForm.requestSubmit(); - })))), tbody), dom.br(), dom.br(), dom.h2('Change selected messages'), dom.div(style({ display: 'flex', gap: '2em' }), dom.div(dom.div('Hold'), dom.div(dom.clickbutton('On', async function click(e) { + })), dom.th('ID'), dom.th('Submitted'), dom.th('Account'), dom.th('From'), dom.th('To'), dom.th('Size'), dom.th('Attempts'), dom.th('Hold'), dom.th('Next attempt'), dom.th('Last attempt'), dom.th('Last error'), dom.th('Transport'), dom.th('Require TLS'), dom.th('Actions'))), tbody, dom.tfoot(dom.tr(dom.td(attr.colspan('15'), + // todo: consider implementing infinite scroll, autoloading more pages. means the operations on selected messages should be moved from below to above the table. and probably only show them when at least one message is selected to prevent clutter. + dom.clickbutton('Load more', attr.title('Try to load more entries. 
You can still try to load more entries when at the end of the list, new entries may have been appended since the previous call.'), async function click(e) { + if (msgs.length === 0) { + sort.LastID = 0; + sort.Last = null; + } + else { + const lm = msgs[msgs.length - 1]; + sort.LastID = lm.ID; + if (sort.Field === "Queued") { + sort.Last = lm.Queued; + } + else { + sort.Last = lm.NextAttempt; + } + } + tbody.classList.add('loadstart'); + const l = await check(e.target, client.QueueList(filter, sort)) || []; + msgs.push(...l); + render(); + }))))), dom.br(), dom.br(), dom.div(dom._class('unclutter'), dom.h2('Change selected messages'), dom.div(style({ display: 'flex', gap: '2em' }), dom.div(dom.div('Hold'), dom.div(dom.clickbutton('On', async function click(e) { const n = await check(e.target, (async () => await client.QueueHoldSet(gatherIDs(), true))()); window.alert('' + n + ' message(s) updated'); window.location.reload(); // todo: reload less @@ -2705,7 +2871,7 @@ const queueList = async () => { window.location.reload(); // todo: only refresh the list })), dom.div(dom.div('Delivery'), dom.clickbutton('Fail delivery', attr.title('Cause delivery to fail, sending a DSN to the sender.'), async function click(e) { e.preventDefault(); - if (!window.confirm('Are you sure you want to remove this message? Notifications of delivery failure will be sent (DSNs).')) { + if (!window.confirm('Are you sure you want to fail delivery for the selected message(s)? Notifications of delivery failure will be sent (DSNs).')) { return; } const n = await check(e.target, (async () => await client.QueueFail(gatherIDs()))()); @@ -2713,13 +2879,320 @@ const queueList = async () => { window.location.reload(); // todo: only refresh the list })), dom.div(dom.div('Messages'), dom.clickbutton('Remove', attr.title('Completely remove messages from queue, not sending a DSN.'), async function click(e) { e.preventDefault(); - if (!window.confirm('Are you sure you want to remove this message? 
It will be removed completely, no DSN about failure to deliver will be sent.')) { + if (!window.confirm('Are you sure you want to remove the selected message(s)? They will be removed completely; no DSN about failure to deliver will be sent.')) { return; } const n = await check(e.target, (async () => await client.QueueDrop(gatherIDs()))()); window.alert('' + n + ' message(s) updated'); window.location.reload(); // todo: only refresh the list - })))); + }))))); +}; +const retiredList = async () => { + let filter = { Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', From: '', To: '', Submitted: '', LastActivity: '', Transport: null }; + let sort = { Field: "LastActivity", LastID: 0, Last: null, Asc: false }; + const [retired0, transports0] = await Promise.all([ + client.RetiredList(filter, sort), + client.Transports(), + ]); + let retired = retired0 || []; + let transports = transports0 || {}; + const nowSecs = new Date().getTime() / 1000; + let sortElem; + let filterForm; + let filterAccount; + let filterFrom; + let filterTo; + let filterSubmitted; + let filterLastActivity; + let filterTransport; + let filterSuccess; + const popupDetails = (m) => { + const nowSecs = new Date().getTime() / 1000; + popup(dom.h1('Details'), dom.table(dom.tr(dom.td('Message subject'), dom.td(m.Subject))), dom.br(), dom.h2('Results'), dom.table(dom.thead(dom.tr(dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Secode'), dom.th('Error'))), dom.tbody((m.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('6'), 'No results.')) : [], (m.Results || []).map(r => dom.tr(dom.td(age(r.Start, false, nowSecs)), dom.td(Math.round(r.Duration / 1000000) + 'ms'), dom.td(r.Success ? '✓' : ''), dom.td('' + (r.Code || '')), dom.td(r.Secode), dom.td(r.Error)))))); + }; + let tbody = dom.tbody(); + const render = () => { + const ntbody = dom.tbody(dom._class('loadend'), retired.length === 0 ? 
dom.tr(dom.td(attr.colspan('14'), 'No retired messages.')) : [], retired.map(m => dom.tr(dom.td('' + m.ID + (m.BaseID > 0 ? '/' + m.BaseID : '')), dom.td(m.Success ? '✓' : ''), dom.td(age(new Date(m.LastActivity), false, nowSecs)), dom.td(age(new Date(m.Queued), false, nowSecs)), dom.td(m.SenderAccount || '-'), dom.td(m.SenderLocalpart + "@" + m.SenderDomainStr), // todo: escaping of localpart + dom.td(m.RecipientLocalpart + "@" + m.RecipientDomainStr), // todo: escaping of localpart + dom.td(formatSize(m.Size)), dom.td('' + m.Attempts), dom.td(m.LastAttempt ? age(new Date(m.LastAttempt), false, nowSecs) : '-'), dom.td(m.Results && m.Results.length > 0 ? m.Results[m.Results.length - 1].Error : []), dom.td(m.Transport || ''), dom.td(m.RequireTLS === true ? 'Yes' : (m.RequireTLS === false ? 'No' : '')), dom.td(dom.clickbutton('Details', function click() { + popupDetails(m); + }))))); + tbody.replaceWith(ntbody); + tbody = ntbody; + }; + render(); + dom._kids(page, crumbs(crumblink('Mox Admin', '#'), crumblink('Queue', '#queue'), 'Retired messages'), + // Filtering. + filterForm = dom.form(attr.id('queuefilter'), // Referenced by input elements in table row. + async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + filter = { + Max: filter.Max, + IDs: [], + Account: filterAccount.value, + From: filterFrom.value, + To: filterTo.value, + Submitted: filterSubmitted.value, + LastActivity: filterLastActivity.value, + Transport: !filterTransport.value ? null : (filterTransport.value === '(default)' ? '' : filterTransport.value), + Success: filterSuccess.value === '' ? null : (filterSuccess.value === 'Yes' ? true : false), + }; + sort = { + Field: sortElem.value.startsWith('lastactivity') ? 
'LastActivity' : 'Queued', + LastID: 0, + Last: null, + Asc: sortElem.value.endsWith('asc'), + }; + tbody.classList.add('loadstart'); + retired = await check({ disabled: false }, client.RetiredList(filter, sort)) || []; + render(); + }), dom.h2('Retired messages'), dom.p('Meta information about queued messages may be kept after successful and/or failed delivery, configurable per account.'), dom.table(dom._class('hover'), style({ width: '100%' }), dom.thead(dom.tr(dom.td('Filter'), dom.td(filterSuccess = dom.select(attr.form('queuefilter'), function change() { + filterForm.requestSubmit(); + }, dom.option(''), dom.option('Yes'), dom.option('No'))), dom.td(filterLastActivity = dom.input(attr.form('queuefilter'), style({ width: '7em' }), attr.title('Example: ">-1h" for filtering messages with last activity less than 1 hour ago.'))), dom.td(filterSubmitted = dom.input(attr.form('queuefilter'), style({ width: '7em' }), attr.title('Example: "<-1h" for filtering messages submitted more than 1 hour ago.'))), dom.td(filterAccount = dom.input(attr.form('queuefilter'))), dom.td(filterFrom = dom.input(attr.form('queuefilter')), attr.title('Example: "@sender.example" to filter by domain of sender.')), dom.td(filterTo = dom.input(attr.form('queuefilter')), attr.title('Example: "@recipient.example" to filter by domain of recipient.')), dom.td(), // todo: add filter by size? + dom.td(), // todo: add filter by attempts? + dom.td(), dom.td(), dom.td(filterTransport = dom.select(Object.keys(transports).length === 0 ? style({ display: 'none' }) : [], attr.form('queuefilter'), function change() { + filterForm.requestSubmit(); + }, dom.option(''), dom.option('(default)'), Object.keys(transports).sort().map(t => dom.option(t)))), dom.td(attr.colspan('2'), style({ textAlign: 'right' }), // Less content shifting while rendering. 
+ 'Sort ', sortElem = dom.select(attr.form('queuefilter'), function change() { + filterForm.requestSubmit(); + }, dom.option('Last activity ↓', attr.value('lastactivity-desc')), dom.option('Last activity ↑', attr.value('lastactivity-asc')), dom.option('Submitted ↓', attr.value('submitted-desc')), dom.option('Submitted ↑', attr.value('submitted-asc'))), ' ', dom.submitbutton('Apply', attr.form('queuefilter')), ' ', dom.clickbutton('Reset', attr.form('queuefilter'), function click() { + filterForm.reset(); + filterForm.requestSubmit(); + }))), dom.tr(dom.th('ID'), dom.th('Success'), dom.th('Last activity'), dom.th('Submitted'), dom.th('Account'), dom.th('From'), dom.th('To'), dom.th('Size'), dom.th('Attempts'), dom.th('Last attempt'), dom.th('Last error'), dom.th('Transport'), dom.th('Require TLS'), dom.th('Actions'))), tbody, dom.tfoot(dom.tr(dom.td(attr.colspan('14'), dom.clickbutton('Load more', attr.title('Try to load more entries. You can still try to load more entries when at the end of the list, new entries may have been appended since the previous call.'), async function click(e) { + if (retired.length === 0) { + sort.LastID = 0; + sort.Last = null; + } + else { + const lm = retired[retired.length - 1]; + sort.LastID = lm.ID; + if (sort.Field === "Queued") { + sort.Last = lm.Queued; + } + else { + sort.Last = lm.LastActivity; + } + } + tbody.classList.add('loadstart'); + const l = await check(e.target, client.RetiredList(filter, sort)) || []; + retired.push(...l); + render(); + })))))); +}; +const formatExtra = (extra) => { + if (!extra) { + return ''; + } + return Object.entries(extra).sort((a, b) => a[0] < b[0] ? 
-1 : 1).map(t => t[0] + ': ' + t[1]).join('; '); +}; +const hooksList = async () => { + let filter = { Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', Submitted: '', NextAttempt: '', Event: '' }; + let sort = { Field: "NextAttempt", LastID: 0, Last: null, Asc: true }; + let hooks = await client.HookList(filter, sort) || []; + const nowSecs = new Date().getTime() / 1000; + let sortElem; + let filterForm; + let filterSubmitted; + let filterAccount; + let filterEvent; + let filterNextAttempt; + // Hook ID to checkbox. + let toggles = new Map(); + // We operate on what the user has selected, not what the filters would currently + // evaluate to. This function can throw an error, which is why we have awkward + // syntax when calling this as parameter in api client calls below. + const gatherIDs = () => { + const f = { + Max: 0, + IDs: Array.from(toggles.entries()).filter(t => t[1].checked).map(t => t[0]), + Account: '', + Event: '', + Submitted: '', + NextAttempt: '', + }; + // Don't want to accidentally operate on all messages. + if ((f.IDs || []).length === 0) { + throw new Error('No hooks selected.'); + } + return f; + }; + const popupDetails = (h) => { + const nowSecs = new Date().getTime() / 1000; + popup(dom.h1('Details'), dom.div(dom._class('twocols'), dom.div(dom.table(dom.tr(dom.td('Message subject'), dom.td(h.Subject))), dom.br(), dom.h2('Results'), dom.table(dom.thead(dom.tr(dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Error'), dom.th('URL'), dom.th('Response'))), dom.tbody((h.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('7'), 'No results.')) : [], (h.Results || []).map(r => dom.tr(dom.td(age(r.Start, false, nowSecs)), dom.td(Math.round(r.Duration / 1000000) + 'ms'), dom.td(r.Success ? 
'✓' : ''), dom.td('' + (r.Code || '')), dom.td(r.Error), dom.td(r.URL), dom.td(r.Response))))), dom.br()), dom.div(dom.h2('Webhook JSON body'), dom.pre(dom._class('literal'), JSON.stringify(JSON.parse(h.Payload), undefined, '\t'))))); + }; + let tbody = dom.tbody(); + const render = () => { + toggles = new Map(); + for (const h of (hooks || [])) { + toggles.set(h.ID, dom.input(attr.type('checkbox'), (hooks || []).length === 1 ? attr.checked('') : [])); + } + const ntbody = dom.tbody(dom._class('loadend'), hooks.length === 0 ? dom.tr(dom.td(attr.colspan('15'), 'No webhooks.')) : [], hooks.map(h => dom.tr(dom.td(toggles.get(h.ID)), dom.td('' + h.ID), dom.td(age(new Date(h.Submitted), false, nowSecs)), dom.td('' + (h.QueueMsgID || '')), // todo future: make it easy to open the corresponding (retired) message from queue (if still around). + dom.td('' + h.FromID), dom.td('' + h.MessageID), dom.td(h.Account || '-'), dom.td(h.IsIncoming ? "incoming" : h.OutgoingEvent), dom.td(formatExtra(h.Extra)), dom.td('' + h.Attempts), dom.td(age(h.NextAttempt, true, nowSecs)), dom.td(h.Results && h.Results.length > 0 ? age(h.Results[h.Results.length - 1].Start, false, nowSecs) : []), dom.td(h.Results && h.Results.length > 0 ? h.Results[h.Results.length - 1].Error : []), dom.td(h.URL), dom.td(dom.clickbutton('Details', function click() { + popupDetails(h); + }))))); + tbody.replaceWith(ntbody); + tbody = ntbody; + }; + render(); + const buttonNextAttemptSet = (text, minutes) => dom.clickbutton(text, async function click(e) { + // note: awkward client call because gatherIDs() can throw an exception. 
+ const n = await check(e.target, (async () => client.HookNextAttemptSet(gatherIDs(), minutes))()); + window.alert('' + n + ' hook(s) updated'); + window.location.reload(); // todo: reload less + }); + const buttonNextAttemptAdd = (text, minutes) => dom.clickbutton(text, async function click(e) { + const n = await check(e.target, (async () => client.HookNextAttemptAdd(gatherIDs(), minutes))()); + window.alert('' + n + ' hook(s) updated'); + window.location.reload(); // todo: reload less + }); + dom._kids(page, crumbs(crumblink('Mox Admin', '#'), 'Webhook queue'), dom.p(dom.a(attr.href('#webhookqueue/retired'), 'Retired webhooks')), dom.h2('Webhooks'), dom.table(dom._class('hover'), style({ width: '100%' }), dom.thead(dom.tr(dom.td(attr.colspan('2'), 'Filter'), dom.td(filterSubmitted = dom.input(attr.form('hooksfilter'), style({ width: '7em' }), attr.title('Example: "<-1h" for filtering webhooks submitted more than 1 hour ago.'))), dom.td(), dom.td(), dom.td(), dom.td(filterAccount = dom.input(attr.form('hooksfilter'), style({ width: '8em' }))), dom.td(filterEvent = dom.select(attr.form('hooksfilter'), function change() { + filterForm.requestSubmit(); + }, dom.option(''), + // note: outgoing hook events are in ../webhook/webhook.go, ../mox-/config.go ../webadmin/admin.ts and ../webapi/gendoc.sh. keep in sync. 
+ ['incoming', 'delivered', 'suppressed', 'delayed', 'failed', 'relayed', 'expanded', 'canceled', 'unrecognized'].map(s => dom.option(s)))), dom.td(), dom.td(), dom.td(filterNextAttempt = dom.input(attr.form('hooksfilter'), style({ width: '7em' }), attr.title('Example: ">1h" for filtering webhooks to be delivered in more than 1 hour, or " await client.HookCancel(gatherIDs()))()); + window.alert('' + n + ' webhook(s) updated'); + window.location.reload(); // todo: only refresh the list + }))))); +}; +const hooksRetiredList = async () => { + let filter = { Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', Submitted: '', LastActivity: '', Event: '' }; + let sort = { Field: "LastActivity", LastID: 0, Last: null, Asc: false }; + let hooks = await client.HookRetiredList(filter, sort) || []; + const nowSecs = new Date().getTime() / 1000; + let sortElem; + let filterForm; + let filterSubmitted; + let filterAccount; + let filterEvent; + let filterLastActivity; + const popupDetails = (h) => { + const nowSecs = new Date().getTime() / 1000; + popup(dom.h1('Details'), dom.div(dom._class('twocols'), dom.div(dom.table(dom.tr(dom.td('Message subject'), dom.td(h.Subject)), h.SupersededByID != 0 ? dom.tr(dom.td('Superseded by webhook ID'), dom.td('' + h.SupersededByID)) : []), dom.br(), dom.h2('Results'), dom.table(dom.thead(dom.tr(dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Error'), dom.th('URL'), dom.th('Response'))), dom.tbody((h.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('7'), 'No results.')) : [], (h.Results || []).map(r => dom.tr(dom.td(age(r.Start, false, nowSecs)), dom.td(Math.round(r.Duration / 1000000) + 'ms'), dom.td(r.Success ? 
'✓' : ''), dom.td('' + (r.Code || '')), dom.td(r.Error), dom.td(r.URL), dom.td(r.Response))))), dom.br()), dom.div(dom.h2('Webhook JSON body'), dom.pre(dom._class('literal'), JSON.stringify(JSON.parse(h.Payload), undefined, '\t'))))); + }; + let tbody = dom.tbody(); + // todo future: add selection + button to reschedule old retired webhooks. + const render = () => { + const ntbody = dom.tbody(dom._class('loadend'), hooks.length === 0 ? dom.tr(dom.td(attr.colspan('14'), 'No retired webhooks.')) : [], hooks.map(h => dom.tr(dom.td('' + h.ID), dom.td(h.Success ? '✓' : ''), dom.td(age(h.LastActivity, false, nowSecs)), dom.td(age(new Date(h.Submitted), false, nowSecs)), dom.td('' + (h.QueueMsgID || '')), dom.td('' + h.FromID), dom.td('' + h.MessageID), dom.td(h.Account || '-'), dom.td(h.IsIncoming ? "incoming" : h.OutgoingEvent), dom.td(formatExtra(h.Extra)), dom.td('' + h.Attempts), dom.td(h.Results && h.Results.length > 0 ? h.Results[h.Results.length - 1].Error : []), dom.td(h.URL), dom.td(dom.clickbutton('Details', function click() { + popupDetails(h); + }))))); + tbody.replaceWith(ntbody); + tbody = ntbody; + }; + render(); + dom._kids(page, crumbs(crumblink('Mox Admin', '#'), crumblink('Webhook queue', '#webhookqueue'), 'Retired webhooks'), dom.h2('Retired webhooks'), dom.table(dom._class('hover'), style({ width: '100%' }), dom.thead(dom.tr(dom.td('Filter'), dom.td(), dom.td(filterLastActivity = dom.input(attr.form('hooksfilter'), style({ width: '7em' }), attr.title('Example: ">-1h" for filtering webhooks with last activity less than 1 hour ago.'))), dom.td(filterSubmitted = dom.input(attr.form('hooksfilter'), style({ width: '7em' }), attr.title('Example: "<-1h" for filtering webhooks submitted more than 1 hour ago.'))), dom.td(), dom.td(), dom.td(), dom.td(filterAccount = dom.input(attr.form('hooksfilter'), style({ width: '8em' }))), dom.td(filterEvent = dom.select(attr.form('hooksfilter'), function change() { + filterForm.requestSubmit(); + }, dom.option(''), + //
note: outgoing hook events are in ../webhook/webhook.go, ../mox-/config.go ../webadmin/admin.ts and ../webapi/gendoc.sh. keep in sync. + ['incoming', 'delivered', 'suppressed', 'delayed', 'failed', 'relayed', 'expanded', 'canceled', 'unrecognized'].map(s => dom.option(s)))), dom.td(), dom.td(), dom.td(), dom.td(attr.colspan('2'), style({ textAlign: 'right' }), // Less content shifting while rendering. + 'Sort ', sortElem = dom.select(attr.form('hooksfilter'), function change() { + filterForm.requestSubmit(); + }, dom.option('Last activity ↓', attr.value('lastactivity-desc')), dom.option('Last activity ↑', attr.value('lastactivity-asc')), dom.option('Submitted ↓', attr.value('submitted-desc')), dom.option('Submitted ↑', attr.value('submitted-asc'))), ' ', dom.submitbutton('Apply', attr.form('hooksfilter')), ' ', dom.clickbutton('Reset', attr.form('hooksfilter'), function click() { + filterForm.reset(); + filterForm.requestSubmit(); + }))), dom.tr(dom.th('ID'), dom.th('Success'), dom.th('Last'), dom.th('Submitted'), dom.th('Queue Msg ID', attr.title('ID of queued message this event is about.')), dom.th('FromID'), dom.th('MessageID'), dom.th('Account'), dom.th('Event'), dom.th('Extra'), dom.th('Attempts'), dom.th('Error'), dom.th('URL'), dom.th('Actions'))), tbody, dom.tfoot(dom.tr(dom.td(attr.colspan('14'), dom.clickbutton('Load more', attr.title('Try to load more entries. You can still try to load more entries when at the end of the list, new entries may have been appended since the previous call.'), async function click(e) { + if (hooks.length === 0) { + sort.LastID = 0; + sort.Last = null; + } + else { + const last = hooks[hooks.length - 1]; + sort.LastID = last.ID; + if (sort.Field === "Submitted") { + sort.Last = last.Submitted; + } + else { + sort.Last = last.LastActivity; + } + } + tbody.classList.add('loadstart'); + const l = await check(e.target, client.HookRetiredList(filter, sort)) || []; + hooks.push(...l); + render(); + }))))), + // Filtering.
+ filterForm = dom.form(attr.id('hooksfilter'), // Referenced by input elements in table row. + async function submit(e) { + e.preventDefault(); + e.stopPropagation(); + filter = { + Max: filter.Max, + IDs: [], + Account: filterAccount.value, + Event: filterEvent.value, + Submitted: filterSubmitted.value, + LastActivity: filterLastActivity.value, + }; + sort = { + Field: sortElem.value.startsWith('lastactivity') ? 'LastActivity' : 'Submitted', + LastID: 0, + Last: null, + Asc: sortElem.value.endsWith('asc'), + }; + tbody.classList.add('loadstart'); + hooks = await check({ disabled: false }, client.HookRetiredList(filter, sort)) || []; + render(); + })); }; const webserver = async () => { let conf = await client.WebserverConfig(); @@ -3072,6 +3545,15 @@ const init = async () => { else if (h === 'queue') { await queueList(); } + else if (h === 'queue/retired') { + await retiredList(); + } + else if (h === 'webhookqueue') { + await hooksList(); + } + else if (h === 'webhookqueue/retired') { + await hooksRetiredList(); + } else if (h === 'tlsrpt') { await tlsrptIndex(); } diff --git a/webadmin/admin.ts b/webadmin/admin.ts index b385ad1..4b35b80 100644 --- a/webadmin/admin.ts +++ b/webadmin/admin.ts @@ -68,6 +68,48 @@ const login = async (reason: string) => { }) } +// Popup shows kids in a centered div with white background on top of a +// transparent overlay on top of the window. Clicking the overlay or hitting +// Escape closes the popup. Scrollbars are automatically added to the div with +// kids. Returns a function that removes the popup. 
+const popup = (...kids: ElemArg[]) => { + const origFocus = document.activeElement + const close = () => { + if (!root.parentNode) { + return + } + root.remove() + if (origFocus && origFocus instanceof HTMLElement && origFocus.parentNode) { + origFocus.focus() + } + } + let content: HTMLElement + const root = dom.div( + style({position: 'fixed', top: 0, right: 0, bottom: 0, left: 0, backgroundColor: 'rgba(0, 0, 0, 0.1)', display: 'flex', alignItems: 'center', justifyContent: 'center', zIndex: '1'}), + function keydown(e: KeyboardEvent) { + if (e.key === 'Escape') { + e.stopPropagation() + close() + } + }, + function click(e: MouseEvent) { + e.stopPropagation() + close() + }, + content=dom.div( + attr.tabindex('0'), + style({backgroundColor: 'white', borderRadius: '.25em', padding: '1em', boxShadow: '0 0 20px rgba(0, 0, 0, 0.1)', border: '1px solid #ddd', maxWidth: '95vw', overflowX: 'auto', maxHeight: '95vh', overflowY: 'auto'}), + function click(e: MouseEvent) { + e.stopPropagation() + }, + kids, + ) + ) + document.body.appendChild(root) + content.focus() + return close +} + const localStorageGet = (k: string): string | null => { try { return window.localStorage.getItem(k) @@ -284,9 +326,10 @@ const formatSize = (n: number) => { } const index = async () => { - const [domains, queueSize, checkUpdatesEnabled, accounts] = await Promise.all([ + const [domains, queueSize, hooksQueueSize, checkUpdatesEnabled, accounts] = await Promise.all([ client.Domains(), client.QueueSize(), + client.HookQueueSize(), client.CheckUpdatesEnabled(), client.Accounts(), ]) @@ -306,6 +349,7 @@ const index = async () => { dom.p( dom.a('Accounts', attr.href('#accounts')), dom.br(), dom.a('Queue', attr.href('#queue')), ' ('+queueSize+')', dom.br(), + dom.a('Webhook queue', attr.href('#webhookqueue')), ' ('+hooksQueueSize+')', dom.br(), ), dom.h2('Domains'), (domains || []).length === 0 ? 
box(red, 'No domains') : @@ -588,6 +632,8 @@ const account = async (name: string) => { client.Domains(), ]) + // todo: show suppression list, and buttons to add/remove entries. + let form: HTMLFormElement let fieldset: HTMLFieldSetElement let localpart: HTMLInputElement @@ -1224,7 +1270,7 @@ const dmarcEvaluations = async () => { dom.br(), dom.br(), dom.h2('Suppressed reporting addresses'), - dom.p('In practice, sending a DMARC report to a reporting address can cause DSN to be sent back. Such addresses can be added to a supression list for a period, to reduce noise in the postmaster mailbox.'), + dom.p('In practice, sending a DMARC report to a reporting address can cause DSN to be sent back. Such addresses can be added to a suppression list for a period, to reduce noise in the postmaster mailbox.'), dom.form( async function submit(e: SubmitEvent) { e.stopPropagation() @@ -2139,17 +2185,21 @@ const dnsbl = async () => { } const queueList = async () => { - let [holdRules, msgs, transports] = await Promise.all([ + let filter: api.Filter = {Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', From: '', To: '', Hold: null, Submitted: '', NextAttempt: '', Transport: null} + let sort: api.Sort = {Field: "NextAttempt", LastID: 0, Last: null, Asc: true} + let [holdRules, msgs0, transports] = await Promise.all([ client.QueueHoldRuleList(), - client.QueueList({IDs: [], Account: '', From: '', To: '', Hold: null, Submitted: '', NextAttempt: '', Transport: null}), + client.QueueList(filter, sort), client.Transports(), ]) + let msgs: api.Msg[] = msgs0 || [] - // todo: sorting by address/timestamps/attempts. + // todo: more sorting // todo: after making changes, don't reload entire page. probably best to fetch messages by id and rerender. also report on which messages weren't affected (e.g. no longer in queue). // todo: display which transport will be used for a message according to routing rules (in case none is explicitly configured). 
// todo: live updates with SSE connections // todo: keep updating times/age. + // todo: reuse this code in webaccount to show users their own message queue, and give (more limited) options to fail/reschedule deliveries. const nowSecs = new Date().getTime()/1000 @@ -2158,6 +2208,7 @@ const queueList = async () => { let holdRuleRecipientDomain: HTMLInputElement let holdRuleSubmit: HTMLButtonElement + let sortElem: HTMLSelectElement let filterForm: HTMLFormElement let filterAccount: HTMLInputElement let filterFrom: HTMLInputElement @@ -2178,6 +2229,7 @@ const queueList = async () => { // syntax when calling this as parameter in api client calls below. const gatherIDs = () => { const f: api.Filter = { + Max: 0, IDs: Array.from(toggles.entries()).filter(t => t[1].checked).map(t => t[0]), Account: '', From: '', @@ -2194,17 +2246,50 @@ const queueList = async () => { return f } - const tbody = dom.tbody() + const popupDetails = (m: api.Msg) => { + const nowSecs = new Date().getTime()/1000 + popup( + dom.h1('Details'), + dom.table( + dom.tr(dom.td('Message subject'), dom.td(m.Subject)), + ), + dom.br(), + dom.h2('Results'), + dom.table( + dom.thead( + dom.tr( + dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Secode'), dom.th('Error'), + ), + ), + dom.tbody( + (m.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('6'), 'No results.')) : [], + (m.Results || []).map(r => + dom.tr( + dom.td(age(r.Start, false, nowSecs)), + dom.td(Math.round(r.Duration/1000000)+'ms'), + dom.td(r.Success ? '✓' : ''), + dom.td(''+ (r.Code || '')), + dom.td(r.Secode), + dom.td(r.Error), + ) + ), + ), + ), + ) + } + + let tbody = dom.tbody() const render = () => { toggles = new Map() - for (const m of (msgs || [])) { - toggles.set(m.ID, dom.input(attr.type('checkbox'), attr.checked(''), )) + for (const m of msgs) { + toggles.set(m.ID, dom.input(attr.type('checkbox'), msgs.length === 1 ? 
attr.checked('') : [], )) } - dom._kids(tbody, - (msgs || []).length === 0 ? dom.tr(dom.td(attr.colspan('14'), 'No messages.')) : [], - (msgs || []).map(m => { + const ntbody = dom.tbody( + dom._class('loadend'), + msgs.length === 0 ? dom.tr(dom.td(attr.colspan('15'), 'No messages.')) : [], + msgs.map(m => { return dom.tr( dom.td(toggles.get(m.ID)!), dom.td(''+m.ID + (m.BaseID > 0 ? '/'+m.BaseID : '')), @@ -2217,12 +2302,19 @@ const queueList = async () => { dom.td(m.Hold ? 'Hold' : ''), dom.td(age(new Date(m.NextAttempt), true, nowSecs)), dom.td(m.LastAttempt ? age(new Date(m.LastAttempt), false, nowSecs) : '-'), - dom.td(m.LastError || '-'), - dom.td(m.RequireTLS === true ? 'Yes' : (m.RequireTLS === false ? 'No' : 'Default')), + dom.td(m.Results && m.Results.length > 0 ? m.Results[m.Results.length-1].Error : []), dom.td(m.Transport || '(default)'), + dom.td(m.RequireTLS === true ? 'Yes' : (m.RequireTLS === false ? 'No' : '')), + dom.td( + dom.clickbutton('Details', function click() { + popupDetails(m) + }), + ), ) }), ) + tbody.replaceWith(ntbody) + tbody = ntbody } render() @@ -2244,6 +2336,7 @@ const queueList = async () => { 'Queue', ), + dom.p(dom.a(attr.href('#queue/retired'), 'Retired messages')), dom.h2('Hold rules', attr.title('Messages submitted to the queue that match a hold rule are automatically marked as "on hold", preventing delivery until explicitly taken off hold again.')), dom.form( attr.id('holdRuleForm'), @@ -2326,7 +2419,8 @@ const queueList = async () => { e.preventDefault() e.stopPropagation() - const filter: api.Filter = { + filter = { + Max: filter.Max, IDs: [], Account: filterAccount.value, From: filterFrom.value, @@ -2336,17 +2430,86 @@ const queueList = async () => { NextAttempt: filterNextAttempt.value, Transport: !filterTransport.value ? null : (filterTransport.value === '(default)' ? 
'' : filterTransport.value), } - dom._kids(tbody) - msgs = await check({disabled: false}, client.QueueList(filter)) + sort = { + Field: sortElem.value.startsWith('nextattempt') ? 'NextAttempt' : 'Queued', + LastID: 0, + Last: null, + Asc: sortElem.value.endsWith('asc'), + } + tbody.classList.add('loadstart') + msgs = await check({disabled: false}, client.QueueList(filter, sort)) || [] render() }, ), dom.h2('Messages'), dom.table(dom._class('hover'), + style({width: '100%'}), dom.thead( dom.tr( - dom.th(), + dom.td(attr.colspan('2'), 'Filter'), + dom.td(filterSubmitted=dom.input(attr.form('queuefilter'), style({width: '7em'}), attr.title('Example: "<-1h" for filtering messages submitted more than 1 hour ago.'))), + dom.td(filterAccount=dom.input(attr.form('queuefilter'))), + dom.td(filterFrom=dom.input(attr.form('queuefilter')), attr.title('Example: "@sender.example" to filter by domain of sender.')), + dom.td(filterTo=dom.input(attr.form('queuefilter')), attr.title('Example: "@recipient.example" to filter by domain of recipient.')), + dom.td(), // todo: add filter by size? + dom.td(), // todo: add filter by attempts? + dom.td( + filterHold=dom.select( + attr.form('queuefilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option('', attr.value('')), + dom.option('Yes'), + dom.option('No'), + ), + ), + dom.td(filterNextAttempt=dom.input(attr.form('queuefilter'), style({width: '7em'}), attr.title('Example: ">1h" for filtering messages to be delivered in more than 1 hour, or " dom.option(t)) + ), + ), + dom.td( + attr.colspan('2'), + style({textAlign: 'right'}), // Less content shifting while rendering. 
+ 'Sort ', + sortElem=dom.select( + attr.form('queuefilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option('Next attempt ↑', attr.value('nextattempt-asc')), + dom.option('Next attempt ↓', attr.value('nextattempt-desc')), + dom.option('Submitted ↑', attr.value('submitted-asc')), + dom.option('Submitted ↓', attr.value('submitted-desc')), + ), ' ', + dom.submitbutton('Apply', attr.form('queuefilter')), ' ', + dom.clickbutton('Reset', attr.form('queuefilter'), function click() { + filterForm.reset() + filterForm.requestSubmit() + }), + ), + ), + dom.tr( + dom.td( + dom.input(attr.type('checkbox'), msgs.length === 1 ? attr.checked('') : [], attr.form('queuefilter'), function change(e: MouseEvent) { + const elem = e.target! as HTMLInputElement + for (const [_, toggle] of toggles) { + toggle.checked = elem.checked + } + }), + ), dom.th('ID'), dom.th('Submitted'), dom.th('Account'), @@ -2358,186 +2521,950 @@ const queueList = async () => { dom.th('Next attempt'), dom.th('Last attempt'), dom.th('Last error'), - dom.th('Require TLS'), dom.th('Transport'), - dom.th(), + dom.th('Require TLS'), + dom.th('Actions'), ), + ), + tbody, + dom.tfoot( dom.tr( dom.td( - dom.input(attr.type('checkbox'), attr.checked(''), attr.form('queuefilter'), function change(e: MouseEvent) { - const elem = e.target! as HTMLInputElement - for (const [_, toggle] of toggles) { - toggle.checked = elem.checked + attr.colspan('15'), + // todo: consider implementing infinite scroll, autoloading more pages. means the operations on selected messages should be moved from below to above the table. and probably only show them when at least one message is selected to prevent clutter. + dom.clickbutton('Load more', attr.title('Try to load more entries. 
You can still try to load more entries when at the end of the list, new entries may have been appended since the previous call.'), async function click(e: MouseEvent) { + if (msgs.length === 0) { + sort.LastID = 0 + sort.Last = null + } else { + const lm = msgs[msgs.length-1] + sort.LastID = lm.ID + if (sort.Field === "Queued") { + sort.Last = lm.Queued + } else { + sort.Last = lm.NextAttempt + } } + tbody.classList.add('loadstart') + const l = await check(e.target! as HTMLButtonElement, client.QueueList(filter, sort)) || [] + msgs.push(...l) + render() }), ), - dom.td(), - dom.td(filterSubmitted=dom.input(attr.form('queuefilter'), style({width: '7em'}), attr.title('Example: "<1h" for filtering messages submitted more than 1 minute ago.'))), + ), + ), + ), + dom.br(), + dom.br(), + dom.div( + dom._class('unclutter'), + dom.h2('Change selected messages'), + dom.div( + style({display: 'flex', gap: '2em'}), + dom.div( + dom.div('Hold'), + dom.div( + dom.clickbutton('On', async function click(e: MouseEvent) { + const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueHoldSet(gatherIDs(), true))()) + window.alert(''+n+' message(s) updated') + window.location.reload() // todo: reload less + }), ' ', + dom.clickbutton('Off', async function click(e: MouseEvent) { + const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueHoldSet(gatherIDs(), false))()) + window.alert(''+n+' message(s) updated') + window.location.reload() // todo: reload less + }), + ), + ), + dom.div( + dom.div('Schedule next delivery attempt'), + buttonNextAttemptSet('Now', 0), ' ', + dom.clickbutton('More...', function click(e: MouseEvent) { + (e.target! 
as HTMLButtonElement).replaceWith( + dom.div( + dom.br(), + dom.div('Scheduled time plus'), + dom.div( + buttonNextAttemptAdd('1m', 1), ' ', + buttonNextAttemptAdd('5m', 5), ' ', + buttonNextAttemptAdd('30m', 30), ' ', + buttonNextAttemptAdd('1h', 60), ' ', + buttonNextAttemptAdd('2h', 2*60), ' ', + buttonNextAttemptAdd('4h', 4*60), ' ', + buttonNextAttemptAdd('8h', 8*60), ' ', + buttonNextAttemptAdd('16h', 16*60), ' ', + ), + dom.br(), + dom.div('Now plus'), + dom.div( + buttonNextAttemptSet('1m', 1), ' ', + buttonNextAttemptSet('5m', 5), ' ', + buttonNextAttemptSet('30m', 30), ' ', + buttonNextAttemptSet('1h', 60), ' ', + buttonNextAttemptSet('2h', 2*60), ' ', + buttonNextAttemptSet('4h', 4*60), ' ', + buttonNextAttemptSet('8h', 8*60), ' ', + buttonNextAttemptSet('16h', 16*60), ' ', + ) + ) + ) + }), + ), + dom.div( + dom.form( + dom.label('Require TLS'), + requiretlsFieldset=dom.fieldset( + requiretls=dom.select( + attr.title('How to use TLS for message delivery over SMTP:\n\nDefault: Delivery attempts follow the policies published by the recipient domain: Verification with MTA-STS and/or DANE, or optional opportunistic unverified STARTTLS if the domain does not specify a policy.\n\nWith RequireTLS: For sensitive messages, you may want to require verified TLS. The recipient destination domain SMTP server must support the REQUIRETLS SMTP extension for delivery to succeed. 
It is automatically chosen when the destination domain mail servers of all recipients are known to support it.\n\nFallback to insecure: If delivery fails due to MTA-STS and/or DANE policies specified by the recipient domain, and the content is not sensitive, you may choose to ignore the recipient domain TLS policies so delivery can succeed.'), + dom.option('Default', attr.value('')), + dom.option('With RequireTLS', attr.value('yes')), + dom.option('Fallback to insecure', attr.value('no')), + ), + ' ', + dom.submitbutton('Change'), + ), + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + const n = await check(requiretlsFieldset, (async () => await client.QueueRequireTLSSet(gatherIDs(), requiretls.value === '' ? null : requiretls.value === 'yes'))()) + window.alert(''+n+' message(s) updated') + window.location.reload() // todo: only refresh the list + } + ), + ), + dom.div( + dom.form( + dom.label('Transport'), + dom.fieldset( + transport=dom.select( + attr.title('Transport to use for delivery attempts. The default is direct delivery, connecting to the MX hosts of the domain.'), + dom.option('(default)', attr.value('')), + Object.keys(transports || []).sort().map(t => dom.option(t)), + ), + ' ', + dom.submitbutton('Change'), + ), + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueTransportSet(gatherIDs(), transport.value))()) + window.alert(''+n+' message(s) updated') + window.location.reload() // todo: only refresh the list + } + ), + ), + dom.div( + dom.div('Delivery'), + dom.clickbutton('Fail delivery', attr.title('Cause delivery to fail, sending a DSN to the sender.'), async function click(e: MouseEvent) { + e.preventDefault() + if (!window.confirm('Are you sure you want to fail delivery for the selected message(s)? 
Notifications of delivery failure will be sent (DSNs).')) { + return + } + const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueFail(gatherIDs()))()) + window.alert(''+n+' message(s) updated') + window.location.reload() // todo: only refresh the list + }), + ), + dom.div( + dom.div('Messages'), + dom.clickbutton('Remove', attr.title('Completely remove messages from queue, not sending a DSN.'), async function click(e: MouseEvent) { + e.preventDefault() + if (!window.confirm('Are you sure you want to remove the selected message(s)? They will be removed completely; no DSN about failure to deliver will be sent.')) { + return + } + const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueDrop(gatherIDs()))()) + window.alert(''+n+' message(s) updated') + window.location.reload() // todo: only refresh the list + }), + ), + ), + ), + ) +} + +const retiredList = async () => { + let filter: api.RetiredFilter = {Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', From: '', To: '', Submitted: '', LastActivity: '', Transport: null} + let sort: api.RetiredSort = {Field: "LastActivity", LastID: 0, Last: null, Asc: false} + const [retired0, transports0] = await Promise.all([ + client.RetiredList(filter, sort), + client.Transports(), + ]) + let retired: api.MsgRetired[] = retired0 || [] + let transports: { [key: string]: api.Transport } = transports0 || {} + + const nowSecs = new Date().getTime()/1000 + + let sortElem: HTMLSelectElement + let filterForm: HTMLFormElement + let filterAccount: HTMLInputElement + let filterFrom: HTMLInputElement + let filterTo: HTMLInputElement + let filterSubmitted: HTMLInputElement + let filterLastActivity: HTMLInputElement + let filterTransport: HTMLSelectElement + let filterSuccess: HTMLSelectElement + + const popupDetails = (m: api.MsgRetired) => { + const nowSecs = new Date().getTime()/1000 + popup( + dom.h1('Details'), + dom.table( +
dom.tr(dom.td('Message subject'), dom.td(m.Subject)), + ), + dom.br(), + dom.h2('Results'), + dom.table( + dom.thead( + dom.tr( + dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Secode'), dom.th('Error'), + ), + ), + dom.tbody( + (m.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('6'), 'No results.')) : [], + (m.Results || []).map(r => + dom.tr( + dom.td(age(r.Start, false, nowSecs)), + dom.td(Math.round(r.Duration/1000000)+'ms'), + dom.td(r.Success ? '✓' : ''), + dom.td(''+ (r.Code || '')), + dom.td(r.Secode), + dom.td(r.Error), + ) + ), + ), + ), + ) + } + + let tbody = dom.tbody() + + const render = () => { + const ntbody = dom.tbody( + dom._class('loadend'), + retired.length === 0 ? dom.tr(dom.td(attr.colspan('14'), 'No retired messages.')) : [], + retired.map(m => + dom.tr( + dom.td(''+m.ID + (m.BaseID > 0 ? '/'+m.BaseID : '')), + dom.td(m.Success ? '✓' : ''), + dom.td(age(new Date(m.LastActivity), false, nowSecs)), + dom.td(age(new Date(m.Queued), false, nowSecs)), + dom.td(m.SenderAccount || '-'), + dom.td(m.SenderLocalpart+"@"+m.SenderDomainStr), // todo: escaping of localpart + dom.td(m.RecipientLocalpart+"@"+m.RecipientDomainStr), // todo: escaping of localpart + dom.td(formatSize(m.Size)), + dom.td(''+m.Attempts), + dom.td(m.LastAttempt ? age(new Date(m.LastAttempt), false, nowSecs) : '-'), + dom.td(m.Results && m.Results.length > 0 ? m.Results[m.Results.length-1].Error : []), + dom.td(m.Transport || ''), + dom.td(m.RequireTLS === true ? 'Yes' : (m.RequireTLS === false ? 'No' : '')), + dom.td( + dom.clickbutton('Details', function click() { + popupDetails(m) + }), + ), + ) + ), + ) + tbody.replaceWith(ntbody) + tbody = ntbody + } + render() + + dom._kids(page, + crumbs( + crumblink('Mox Admin', '#'), + crumblink('Queue', '#queue'), + 'Retired messages', + ), + + // Filtering. + filterForm=dom.form( + attr.id('queuefilter'), // Referenced by input elements in table row. 
+ async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + filter = { + Max: filter.Max, + IDs: [], + Account: filterAccount.value, + From: filterFrom.value, + To: filterTo.value, + Submitted: filterSubmitted.value, + LastActivity: filterLastActivity.value, + Transport: !filterTransport.value ? null : (filterTransport.value === '(default)' ? '' : filterTransport.value), + Success: filterSuccess.value === '' ? null : (filterSuccess.value === 'Yes' ? true : false), + } + sort = { + Field: sortElem.value.startsWith('lastactivity') ? 'LastActivity' : 'Queued', + LastID: 0, + Last: null, + Asc: sortElem.value.endsWith('asc'), + } + tbody.classList.add('loadstart') + retired = await check({disabled: false}, client.RetiredList(filter, sort)) || [] + render() + }, + ), + + dom.h2('Retired messages'), + dom.p('Meta information about queued messages may be kept after successful and/or failed delivery, configurable per account.'), + dom.table(dom._class('hover'), + style({width: '100%'}), + dom.thead( + dom.tr( + dom.td('Filter'), + dom.td( + filterSuccess=dom.select( + attr.form('queuefilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option(''), + dom.option('Yes'), + dom.option('No'), + ), + ), + dom.td(filterLastActivity=dom.input(attr.form('queuefilter'), style({width: '7em'}), attr.title('Example: ">-1h" for filtering messages with last activity less than 1 hour ago.'))), + dom.td(filterSubmitted=dom.input(attr.form('queuefilter'), style({width: '7em'}), attr.title('Example: "<-1h" for filtering messages submitted more than 1 hour ago.'))), dom.td(filterAccount=dom.input(attr.form('queuefilter'))), dom.td(filterFrom=dom.input(attr.form('queuefilter')), attr.title('Example: "@sender.example" to filter by domain of sender.')), dom.td(filterTo=dom.input(attr.form('queuefilter')), attr.title('Example: "@recipient.example" to filter by domain of recipient.')), dom.td(), // todo: add filter by size? 
dom.td(), // todo: add filter by attempts? - dom.td( - filterHold=dom.select( - attr.form('queuefilter'), - dom.option('', attr.value('')), - dom.option('Yes'), - dom.option('No'), - function change() { - filterForm.requestSubmit() - }, - ), - ), - dom.td(filterNextAttempt=dom.input(attr.form('queuefilter'), style({width: '7em'}), attr.title('Example: ">1h" for filtering messages to be delivered in more than 1 hour, or " dom.option(t)) + Object.keys(transports).sort().map(t => dom.option(t)) ), ), dom.td( - dom.submitbutton('Filter', attr.form('queuefilter')), ' ', + attr.colspan('2'), + style({textAlign: 'right'}), // Less content shifting while rendering. + 'Sort ', + sortElem=dom.select( + attr.form('queuefilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option('Last activity ↓', attr.value('lastactivity-desc')), + dom.option('Last activity ↑', attr.value('lastactivity-asc')), + dom.option('Submitted ↓', attr.value('submitted-desc')), + dom.option('Submitted ↑', attr.value('submitted-asc')), + ), ' ', + dom.submitbutton('Apply', attr.form('queuefilter')), ' ', dom.clickbutton('Reset', attr.form('queuefilter'), function click() { filterForm.reset() filterForm.requestSubmit() }), ), ), + dom.tr( + dom.th('ID'), + dom.th('Success'), + dom.th('Last activity'), + dom.th('Submitted'), + dom.th('Account'), + dom.th('From'), + dom.th('To'), + dom.th('Size'), + dom.th('Attempts'), + dom.th('Last attempt'), + dom.th('Last error'), + dom.th('Require TLS'), + dom.th('Transport'), + dom.th('Actions'), + ), ), tbody, - ), - dom.br(), - dom.br(), - dom.h2('Change selected messages'), - dom.div( - style({display: 'flex', gap: '2em'}), - dom.div( - dom.div('Hold'), - dom.div( - dom.clickbutton('On', async function click(e: MouseEvent) { - const n = await check(e.target! 
as HTMLButtonElement, (async () => await client.QueueHoldSet(gatherIDs(), true))()) - window.alert(''+n+' message(s) updated') - window.location.reload() // todo: reload less - }), ' ', - dom.clickbutton('Off', async function click(e: MouseEvent) { - const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueHoldSet(gatherIDs(), false))()) - window.alert(''+n+' message(s) updated') - window.location.reload() // todo: reload less - }), + dom.tfoot( + dom.tr( + dom.td( + attr.colspan('14'), + dom.clickbutton('Load more', attr.title('Try to load more entries. You can still try to load more entries when at the end of the list, new entries may have been appended since the previous call.'), async function click(e: MouseEvent) { + if (retired.length === 0) { + sort.LastID = 0 + sort.Last = null + } else { + const lm = retired[retired.length-1] + sort.LastID = lm.ID + if (sort.Field === "Queued") { + sort.Last = lm.Queued + } else { + sort.Last = lm.LastActivity + } + } + tbody.classList.add('loadstart') + const l = await check(e.target! as HTMLButtonElement, client.RetiredList(filter, sort)) || [] + retired.push(...l) + render() + }), + ), ), ), + ), + ) +} + +const formatExtra = (extra: { [key: string]: string; } | undefined) => { + if (!extra) { + return '' + } + return Object.entries(extra).sort((a, b) => a[0] < b[0] ? 
-1 : 1).map(t => t[0]+': '+t[1]).join('; ') +} + +const hooksList = async () => { + let filter: api.HookFilter = {Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', Submitted: '', NextAttempt: '', Event: ''} + let sort: api.HookSort = {Field: "NextAttempt", LastID: 0, Last: null, Asc: true} + let hooks: api.Hook[] = await client.HookList(filter, sort) || [] + + const nowSecs = new Date().getTime()/1000 + + let sortElem: HTMLSelectElement + let filterForm: HTMLFormElement + let filterSubmitted: HTMLInputElement + let filterAccount: HTMLInputElement + let filterEvent: HTMLSelectElement + let filterNextAttempt: HTMLInputElement + + // Hook ID to checkbox. + let toggles = new Map() + + // We operate on what the user has selected, not what the filters would currently + // evaluate to. This function can throw an error, which is why we have awkward + // syntax when calling this as parameter in api client calls below. + const gatherIDs = () => { + const f: api.HookFilter = { + Max: 0, + IDs: Array.from(toggles.entries()).filter(t => t[1].checked).map(t => t[0]), + Account: '', + Event: '', + Submitted: '', + NextAttempt: '', + } + // Don't want to accidentally operate on all messages. + if ((f.IDs || []).length === 0) { + throw new Error('No hooks selected.') + } + return f + } + + const popupDetails = (h: api.Hook) => { + const nowSecs = new Date().getTime()/1000 + popup( + dom.h1('Details'), dom.div( - dom.div('Schedule next delivery attempt'), - buttonNextAttemptSet('Now', 0), ' ', - dom.clickbutton('More...', function click(e: MouseEvent) { - (e.target! 
as HTMLButtonElement).replaceWith( - dom.div( - dom.br(), - dom.div('Scheduled time plus'), - dom.div( - buttonNextAttemptAdd('1m', 1), ' ', - buttonNextAttemptAdd('5m', 5), ' ', - buttonNextAttemptAdd('30m', 30), ' ', - buttonNextAttemptAdd('1h', 60), ' ', - buttonNextAttemptAdd('2h', 2*60), ' ', - buttonNextAttemptAdd('4h', 4*60), ' ', - buttonNextAttemptAdd('8h', 8*60), ' ', - buttonNextAttemptAdd('16h', 16*60), ' ', + dom._class('twocols'), + dom.div( + dom.table( + dom.tr(dom.td('Message subject'), dom.td(h.Subject)), + ), + dom.br(), + dom.h2('Results'), + dom.table( + dom.thead( + dom.tr( + dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Error'), dom.th('URL'), dom.th('Response'), ), - dom.br(), - dom.div('Now plus'), + ), + dom.tbody( + (h.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('7'), 'No results.')) : [], + (h.Results || []).map(r => + dom.tr( + dom.td(age(r.Start, false, nowSecs)), + dom.td(Math.round(r.Duration/1000000)+'ms'), + dom.td(r.Success ? '✓' : ''), + dom.td(''+ (r.Code || '')), + dom.td(r.Error), + dom.td(r.URL), + dom.td(r.Response), + ) + ), + ), + ), + dom.br(), + ), + dom.div( + dom.h2('Webhook JSON body'), + dom.pre(dom._class('literal'), JSON.stringify(JSON.parse(h.Payload), undefined, '\t')), + ), + ), + ) + } + + let tbody = dom.tbody() + + const render = () => { + toggles = new Map() + for (const h of (hooks || [])) { + toggles.set(h.ID, dom.input(attr.type('checkbox'), (hooks || []).length === 1 ? attr.checked('') : [], )) + } + + const ntbody = dom.tbody( + dom._class('loadend'), + hooks.length === 0 ? dom.tr(dom.td(attr.colspan('15'), 'No webhooks.')) : [], + hooks.map(h => + dom.tr( + dom.td(toggles.get(h.ID)!), + dom.td(''+h.ID), + dom.td(age(new Date(h.Submitted), false, nowSecs)), + dom.td(''+(h.QueueMsgID || '')), // todo future: make it easy to open the corresponding (retired) message from queue (if still around). 
+ dom.td(''+h.FromID), + dom.td(''+h.MessageID), + dom.td(h.Account || '-'), + dom.td(h.IsIncoming ? "incoming" : h.OutgoingEvent), + dom.td(formatExtra(h.Extra)), + dom.td(''+h.Attempts), + dom.td(age(h.NextAttempt, true, nowSecs)), + dom.td(h.Results && h.Results.length > 0 ? age(h.Results[h.Results.length-1].Start, false, nowSecs) : []), + dom.td(h.Results && h.Results.length > 0 ? h.Results[h.Results.length-1].Error : []), + dom.td(h.URL), + dom.td( + dom.clickbutton('Details', function click() { + popupDetails(h) + }), + ), + ) + ), + ) + tbody.replaceWith(ntbody) + tbody = ntbody + } + render() + + const buttonNextAttemptSet = (text: string, minutes: number) => dom.clickbutton(text, async function click(e: MouseEvent) { + // note: awkward client call because gatherIDs() can throw an exception. + const n = await check(e.target! as HTMLButtonElement, (async () => client.HookNextAttemptSet(gatherIDs(), minutes))()) + window.alert(''+n+' hook(s) updated') + window.location.reload() // todo: reload less + }) + const buttonNextAttemptAdd = (text: string, minutes: number) => dom.clickbutton(text, async function click(e: MouseEvent) { + const n = await check(e.target! 
as HTMLButtonElement, (async () => client.HookNextAttemptAdd(gatherIDs(), minutes))()) + window.alert(''+n+' hook(s) updated') + window.location.reload() // todo: reload less + }) + + dom._kids(page, + crumbs( + crumblink('Mox Admin', '#'), + 'Webhook queue', + ), + + dom.p(dom.a(attr.href('#webhookqueue/retired'), 'Retired webhooks')), + dom.h2('Webhooks'), + dom.table(dom._class('hover'), + style({width: '100%'}), + dom.thead( + dom.tr( + dom.td(attr.colspan('2'), 'Filter'), + dom.td(filterSubmitted=dom.input(attr.form('hooksfilter'), style({width: '7em'}), attr.title('Example: "<-1h" for filtering webhooks submitted more than 1 hour ago.'))), + dom.td(), + dom.td(), + dom.td(), + dom.td(filterAccount=dom.input(attr.form('hooksfilter'), style({width: '8em'}))), + dom.td( + filterEvent=dom.select( + attr.form('hooksfilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option(''), + // note: outgoing hook events are in ../webhook/webhook.go, ../mox-/config.go ../webadmin/admin.ts and ../webapi/gendoc.sh. keep in sync. + ['incoming', 'delivered', 'suppressed', 'delayed', 'failed', 'relayed', 'expanded', 'canceled', 'unrecognized'].map(s => dom.option(s)), + ), + ), + dom.td(), + dom.td(), + dom.td(filterNextAttempt=dom.input(attr.form('hooksfilter'), style({width: '7em'}), attr.title('Example: ">1h" for filtering webhooks to be delivered in more than 1 hour, or " await client.QueueRequireTLSSet(gatherIDs(), requiretls.value === '' ? null : requiretls.value === 'yes'))()) - window.alert(''+n+' message(s) updated') + if (!window.confirm('Are you sure you want to cancel these webhooks?')) { + return + } + const n = await check(e.target! 
as HTMLButtonElement, (async () => await client.HookCancel(gatherIDs()))()) + window.alert(''+n+' webhook(s) updated') window.location.reload() // todo: only refresh the list - } + }), + ), + ) + ) + ) +} + +const hooksRetiredList = async () => { + let filter: api.HookRetiredFilter = {Max: parseInt(localStorageGet('adminpaginationsize') || '') || 100, IDs: [], Account: '', Submitted: '', LastActivity: '', Event: ''} + let sort: api.HookRetiredSort = {Field: "LastActivity", LastID: 0, Last: null, Asc: false} + let hooks = await client.HookRetiredList(filter, sort) || [] + + const nowSecs = new Date().getTime()/1000 + + let sortElem: HTMLSelectElement + let filterForm: HTMLFormElement + let filterSubmitted: HTMLInputElement + let filterAccount: HTMLInputElement + let filterEvent: HTMLSelectElement + let filterLastActivity: HTMLInputElement + + const popupDetails = (h: api.HookRetired) => { + const nowSecs = new Date().getTime()/1000 + popup( + dom.h1('Details'), + dom.div( + dom._class('twocols'), + dom.div( + dom.table( + dom.tr(dom.td('Message subject'), dom.td(h.Subject)), + h.SupersededByID != 0 ? dom.tr(dom.td('Superseded by webhook ID'), dom.td(''+h.SupersededByID)) : [], + ), + dom.br(), + dom.h2('Results'), + dom.table( + dom.thead( + dom.tr( + dom.th('Start'), dom.th('Duration'), dom.th('Success'), dom.th('Code'), dom.th('Error'), dom.th('URL'), dom.th('Response'), + ), + ), + dom.tbody( + (h.Results || []).length === 0 ? dom.tr(dom.td(attr.colspan('7'), 'No results.')) : [], + (h.Results || []).map(r => + dom.tr( + dom.td(age(r.Start, false, nowSecs)), + dom.td(Math.round(r.Duration/1000000)+'ms'), + dom.td(r.Success ? 
'✓' : ''), + dom.td(''+ (r.Code || '')), + dom.td(r.Error), + dom.td(r.URL), + dom.td(r.Response), + ) + ), + ), + ), + dom.br(), + ), + dom.div( + dom.h2('Webhook JSON body'), + dom.pre(dom._class('literal'), JSON.stringify(JSON.parse(h.Payload), undefined, '\t')), ), ), - dom.div( - dom.form( - dom.label('Transport'), - dom.fieldset( - transport=dom.select( - attr.title('Transport to use for delivery attempts. The default is direct delivery, connecting to the MX hosts of the domain.'), - dom.option('(default)', attr.value('')), - Object.keys(transports || []).sort().map(t => dom.option(t)), - ), - ' ', - dom.submitbutton('Change'), + ) + } + + let tbody = dom.tbody() + + // todo future: add selection + button to reschedule old retired webhooks. + + const render = () => { + const ntbody = dom.tbody( + dom._class('loadend'), + hooks.length === 0 ? dom.tr(dom.td(attr.colspan('14'), 'No retired webhooks.')) : [], + hooks.map(h => + dom.tr( + dom.td(''+h.ID), + dom.td(h.Success ? '✓' : ''), + dom.td(age(h.LastActivity, false, nowSecs)), + dom.td(age(new Date(h.Submitted), false, nowSecs)), + dom.td(''+(h.QueueMsgID || '')), + dom.td(''+h.FromID), + dom.td(''+h.MessageID), + dom.td(h.Account || '-'), + dom.td(h.IsIncoming ? "incoming" : h.OutgoingEvent), + dom.td(formatExtra(h.Extra)), + dom.td(''+h.Attempts), + dom.td(h.Results && h.Results.length > 0 ? h.Results[h.Results.length-1].Error : []), + dom.td(h.URL), + dom.td( + dom.clickbutton('Details', function click() { + popupDetails(h) + }), ), - async function submit(e: SubmitEvent) { - e.preventDefault() - e.stopPropagation() - const n = await check(e.target! 
as HTMLButtonElement, (async () => await client.QueueTransportSet(gatherIDs(), transport.value))()) - window.alert(''+n+' message(s) updated') - window.location.reload() // todo: only refresh the list - } + ) + ), + ) + tbody.replaceWith(ntbody) + tbody = ntbody + } + render() + + dom._kids(page, + crumbs( + crumblink('Mox Admin', '#'), + crumblink('Webhook queue', '#webhookqueue'), + 'Retired webhooks', + ), + + dom.h2('Retired webhooks'), + dom.table(dom._class('hover'), + style({width: '100%'}), + dom.thead( + dom.tr( + dom.td('Filter'), + dom.td(), + dom.td(filterLastActivity=dom.input(attr.form('hooksfilter'), style({width: '7em'}), attr.title('Example: ">-1h" for filtering last activity for webhooks more than 1 hour ago.'))), + dom.td(filterSubmitted=dom.input(attr.form('hooksfilter'), style({width: '7em'}), attr.title('Example: "<-1h" for filtering webhooks submitted more than 1 hour ago.'))), + dom.td(), + dom.td(), + dom.td(), + dom.td(filterAccount=dom.input(attr.form('hooksfilter'), style({width: '8em'}))), + dom.td( + filterEvent=dom.select( + attr.form('hooksfilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option(''), + // note: outgoing hook events are in ../webhook/webhook.go, ../mox-/config.go ../webadmin/admin.ts and ../webapi/gendoc.sh. keep in sync. + ['incoming', 'delivered', 'suppressed', 'delayed', 'failed', 'relayed', 'expanded', 'canceled', 'unrecognized'].map(s => dom.option(s)), + ), + ), + dom.td(), + dom.td(), + dom.td(), + dom.td( + attr.colspan('2'), + style({textAlign: 'right'}), // Less content shifting while rendering. 
+ 'Sort ', + sortElem=dom.select( + attr.form('hooksfilter'), + function change() { + filterForm.requestSubmit() + }, + dom.option('Last activity ↓', attr.value('lastactivity-desc')), + dom.option('Last activity ↑', attr.value('lastactivity-asc')), + dom.option('Submitted ↓', attr.value('submitted-desc')), + dom.option('Submitted ↑', attr.value('submitted-asc')), + ), ' ', + dom.submitbutton('Apply', attr.form('hooksfilter')), ' ', + dom.clickbutton('Reset', attr.form('hooksfilter'), function click() { + filterForm.reset() + filterForm.requestSubmit() + }), + ), + ), + dom.tr( + dom.th('ID'), + dom.th('Success'), + dom.th('Last'), + dom.th('Submitted'), + dom.th('Queue Msg ID', attr.title('ID of queued message this event is about.')), + dom.th('FromID'), + dom.th('MessageID'), + dom.th('Account'), + dom.th('Event'), + dom.th('Extra'), + dom.th('Attempts'), + dom.th('Error'), + dom.th('URL'), + dom.th('Actions'), ), ), - dom.div( - dom.div('Delivery'), - dom.clickbutton('Fail delivery', attr.title('Cause delivery to fail, sending a DSN to the sender.'), async function click(e: MouseEvent) { - e.preventDefault() - if (!window.confirm('Are you sure you want to remove this message? Notifications of delivery failure will be sent (DSNs).')) { - return - } - const n = await check(e.target! as HTMLButtonElement, (async () => await client.QueueFail(gatherIDs()))()) - window.alert(''+n+' message(s) updated') - window.location.reload() // todo: only refresh the list - }), - ), - dom.div( - dom.div('Messages'), - dom.clickbutton('Remove', attr.title('Completely remove messages from queue, not sending a DSN.'), async function click(e: MouseEvent) { - e.preventDefault() - if (!window.confirm('Are you sure you want to remove this message? It will be removed completely, no DSN about failure to deliver will be sent.')) { - return - } - const n = await check(e.target!
as HTMLButtonElement, (async () => await client.QueueDrop(gatherIDs()))()) - window.alert(''+n+' message(s) updated') - window.location.reload() // todo: only refresh the list - }), + tbody, + dom.tfoot( + dom.tr( + dom.td( + attr.colspan('14'), + dom.clickbutton('Load more', attr.title('Try to load more entries. You can still try to load more entries when at the end of the list, new entries may have been appended since the previous call.'), async function click(e: MouseEvent) { + if (hooks.length === 0) { + sort.LastID = 0 + sort.Last = null + } else { + const last = hooks[hooks.length-1] + sort.LastID = last.ID + if (sort.Field === "Submitted") { + sort.Last = last.Submitted + } else { + sort.Last = last.LastActivity + } + } + tbody.classList.add('loadstart') + const l = await check(e.target! as HTMLButtonElement, client.HookRetiredList(filter, sort)) || [] + hooks.push(...l) + render() + }), + ), + ), ), ), + // Filtering. + filterForm=dom.form( + attr.id('hooksfilter'), // Referenced by input elements in table row. + async function submit(e: SubmitEvent) { + e.preventDefault() + e.stopPropagation() + + filter = { + Max: filter.Max, + IDs: [], + Account: filterAccount.value, + Event: filterEvent.value, + Submitted: filterSubmitted.value, + LastActivity: filterLastActivity.value, + } + sort = { + Field: sortElem.value.startsWith('lastactivity') ? 
'LastActivity' : 'Submitted', + LastID: 0, + Last: null, + Asc: sortElem.value.endsWith('asc'), + } + tbody.classList.add('loadstart') + hooks = await check({disabled: false}, client.HookRetiredList(filter, sort)) || [] + render() + }, + ), ) } @@ -3218,6 +4145,12 @@ const init = async () => { await domainDNSRecords(t[1]) } else if (h === 'queue') { await queueList() + } else if (h === 'queue/retired') { + await retiredList() + } else if (h === 'webhookqueue') { + await hooksList() + } else if (h === 'webhookqueue/retired') { + await hooksRetiredList() } else if (h === 'tlsrpt') { await tlsrptIndex() } else if (h === 'tlsrpt/reports') { diff --git a/webadmin/admin_test.go b/webadmin/admin_test.go index 1ccf94d..6c5554e 100644 --- a/webadmin/admin_test.go +++ b/webadmin/admin_test.go @@ -12,6 +12,7 @@ import ( "net/http/httptest" "os" "path/filepath" + "reflect" "runtime/debug" "strings" "testing" @@ -25,6 +26,7 @@ import ( "github.com/mjl-/mox/dns" "github.com/mjl-/mox/mlog" "github.com/mjl-/mox/mox-" + "github.com/mjl-/mox/queue" "github.com/mjl-/mox/store" "github.com/mjl-/mox/webauth" ) @@ -64,6 +66,13 @@ func tcheck(t *testing.T, err error, msg string) { } } +func tcompare(t *testing.T, got, expect any) { + t.Helper() + if !reflect.DeepEqual(got, expect) { + t.Fatalf("got:\n%#v\nexpected:\n%#v", got, expect) + } +} + func readBody(r io.Reader) string { buf, err := io.ReadAll(r) if err != nil { @@ -200,6 +209,30 @@ func TestAdminAuth(t *testing.T) { api.Logout(ctx) tneedErrorCode(t, "server:error", func() { api.Logout(ctx) }) + + err = queue.Init() + tcheck(t, err, "queue init") + + mrl := api.RetiredList(ctxbg, queue.RetiredFilter{}, queue.RetiredSort{}) + tcompare(t, len(mrl), 0) + + n := api.HookQueueSize(ctxbg) + tcompare(t, n, 0) + + hl := api.HookList(ctxbg, queue.HookFilter{}, queue.HookSort{}) + tcompare(t, len(hl), 0) + + n = api.HookNextAttemptSet(ctxbg, queue.HookFilter{}, 0) + tcompare(t, n, 0) + + n = api.HookNextAttemptAdd(ctxbg, 
queue.HookFilter{}, 0) + tcompare(t, n, 0) + + hrl := api.HookRetiredList(ctxbg, queue.HookRetiredFilter{}, queue.HookRetiredSort{}) + tcompare(t, len(hrl), 0) + + n = api.HookCancel(ctxbg, queue.HookFilter{}) + tcompare(t, n, 0) } func TestCheckDomain(t *testing.T) { diff --git a/webadmin/api.json b/webadmin/api.json index e210437..956d877 100644 --- a/webadmin/api.json +++ b/webadmin/api.json @@ -741,6 +741,12 @@ "Typewords": [ "Filter" ] + }, + { + "Name": "sort", + "Typewords": [ + "Sort" + ] } ], "Returns": [ @@ -924,6 +930,172 @@ } ] }, + { + "Name": "RetiredList", + "Docs": "RetiredList returns messages retired from the queue (delivery could\nhave succeeded or failed).", + "Params": [ + { + "Name": "filter", + "Typewords": [ + "RetiredFilter" + ] + }, + { + "Name": "sort", + "Typewords": [ + "RetiredSort" + ] + } + ], + "Returns": [ + { + "Name": "r0", + "Typewords": [ + "[]", + "MsgRetired" + ] + } + ] + }, + { + "Name": "HookQueueSize", + "Docs": "HookQueueSize returns the number of webhooks still to be delivered.", + "Params": [], + "Returns": [ + { + "Name": "r0", + "Typewords": [ + "int32" + ] + } + ] + }, + { + "Name": "HookList", + "Docs": "HookList lists webhooks still to be delivered.", + "Params": [ + { + "Name": "filter", + "Typewords": [ + "HookFilter" + ] + }, + { + "Name": "sort", + "Typewords": [ + "HookSort" + ] + } + ], + "Returns": [ + { + "Name": "r0", + "Typewords": [ + "[]", + "Hook" + ] + } + ] + }, + { + "Name": "HookNextAttemptSet", + "Docs": "HookNextAttemptSet sets a new time for next delivery attempt of matching\nhooks from the queue.", + "Params": [ + { + "Name": "filter", + "Typewords": [ + "HookFilter" + ] + }, + { + "Name": "minutes", + "Typewords": [ + "int32" + ] + } + ], + "Returns": [ + { + "Name": "affected", + "Typewords": [ + "int32" + ] + } + ] + }, + { + "Name": "HookNextAttemptAdd", + "Docs": "HookNextAttemptAdd adds a duration to the time of next delivery attempt of\nmatching hooks from the queue.", + "Params": [ + { 
+ "Name": "filter", + "Typewords": [ + "HookFilter" + ] + }, + { + "Name": "minutes", + "Typewords": [ + "int32" + ] + } + ], + "Returns": [ + { + "Name": "affected", + "Typewords": [ + "int32" + ] + } + ] + }, + { + "Name": "HookRetiredList", + "Docs": "HookRetiredList lists retired webhooks.", + "Params": [ + { + "Name": "filter", + "Typewords": [ + "HookRetiredFilter" + ] + }, + { + "Name": "sort", + "Typewords": [ + "HookRetiredSort" + ] + } + ], + "Returns": [ + { + "Name": "r0", + "Typewords": [ + "[]", + "HookRetired" + ] + } + ] + }, + { + "Name": "HookCancel", + "Docs": "HookCancel prevents further delivery attempts of matching webhooks.", + "Params": [ + { + "Name": "filter", + "Typewords": [ + "HookFilter" + ] + } + ], + "Returns": [ + { + "Name": "affected", + "Typewords": [ + "int32" + ] + } + ] + }, { "Name": "LogLevels", "Docs": "LogLevels returns the current log levels.", @@ -2626,6 +2798,44 @@ "Name": "Account", "Docs": "", "Fields": [ + { + "Name": "OutgoingWebhook", + "Docs": "", + "Typewords": [ + "nullable", + "OutgoingWebhook" + ] + }, + { + "Name": "IncomingWebhook", + "Docs": "", + "Typewords": [ + "nullable", + "IncomingWebhook" + ] + }, + { + "Name": "FromIDLoginAddresses", + "Docs": "", + "Typewords": [ + "[]", + "string" + ] + }, + { + "Name": "KeepRetiredMessagePeriod", + "Docs": "", + "Typewords": [ + "int64" + ] + }, + { + "Name": "KeepRetiredWebhookPeriod", + "Docs": "", + "Typewords": [ + "int64" + ] + }, { "Name": "Domain", "Docs": "", @@ -2736,6 +2946,54 @@ } ] }, + { + "Name": "OutgoingWebhook", + "Docs": "", + "Fields": [ + { + "Name": "URL", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Authorization", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Events", + "Docs": "", + "Typewords": [ + "[]", + "string" + ] + } + ] + }, + { + "Name": "IncomingWebhook", + "Docs": "", + "Fields": [ + { + "Name": "URL", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Authorization", + 
"Docs": "", + "Typewords": [ + "string" + ] + } + ] + }, { "Name": "Destination", "Docs": "", @@ -3968,6 +4226,13 @@ "Name": "Filter", "Docs": "Filter filters messages to list or operate on. Used by admin web interface\nand cli.\n\nOnly non-empty/non-zero values are applied to the filter. Leaving all fields\nempty/zero matches all messages.", "Fields": [ + { + "Name": "Max", + "Docs": "", + "Typewords": [ + "int32" + ] + }, { "Name": "IDs", "Docs": "", @@ -4029,6 +4294,40 @@ } ] }, + { + "Name": "Sort", + "Docs": "", + "Fields": [ + { + "Name": "Field", + "Docs": "\"Queued\" or \"NextAttempt\"/\"\".", + "Typewords": [ + "string" + ] + }, + { + "Name": "LastID", + "Docs": "If \u003e 0, we return objects beyond this, less/greater depending on Asc.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Last", + "Docs": "Value of Field for last object. Must be set iff LastID is set.", + "Typewords": [ + "any" + ] + }, + { + "Name": "Asc", + "Docs": "Ascending, or descending.", + "Typewords": [ + "bool" + ] + } + ] + }, { "Name": "Msg", "Docs": "Msg is a message in the queue.\n\nUse MakeMsg to make a message with fields that Add needs. Add will further set\nqueueing related fields.", @@ -4089,6 +4388,404 @@ "string" ] }, + { + "Name": "FromID", + "Docs": "For transactional messages, used to match later DSNs.", + "Typewords": [ + "string" + ] + }, + { + "Name": "RecipientLocalpart", + "Docs": "Typically a remote user and domain.", + "Typewords": [ + "Localpart" + ] + }, + { + "Name": "RecipientDomain", + "Docs": "", + "Typewords": [ + "IPDomain" + ] + }, + { + "Name": "RecipientDomainStr", + "Docs": "For filtering, unicode domain. Can also contain ip enclosed in [].", + "Typewords": [ + "string" + ] + }, + { + "Name": "Attempts", + "Docs": "Next attempt is based on last attempt and exponential back off based on attempts.", + "Typewords": [ + "int32" + ] + }, + { + "Name": "MaxAttempts", + "Docs": "Max number of attempts before giving up. 
If 0, then the default of 8 attempts is used instead.", + "Typewords": [ + "int32" + ] + }, + { + "Name": "DialedIPs", + "Docs": "For each host, the IPs that were dialed. Used for IP selection for later attempts.", + "Typewords": [ + "{}", + "[]", + "IP" + ] + }, + { + "Name": "NextAttempt", + "Docs": "For scheduling.", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "LastAttempt", + "Docs": "", + "Typewords": [ + "nullable", + "timestamp" + ] + }, + { + "Name": "Results", + "Docs": "", + "Typewords": [ + "[]", + "MsgResult" + ] + }, + { + "Name": "Has8bit", + "Docs": "Whether message contains bytes with high bit set, determines whether 8BITMIME SMTP extension is needed.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "SMTPUTF8", + "Docs": "Whether message requires use of SMTPUTF8.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "IsDMARCReport", + "Docs": "Delivery failures for DMARC reports are handled differently.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "IsTLSReport", + "Docs": "Delivery failures for TLS reports are handled differently.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "Size", + "Docs": "Full size of message, combined MsgPrefix with contents of message file.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "MessageID", + "Docs": "Message-ID header, including \u003c\u003e. Used when composing a DSN, in its References header.", + "Typewords": [ + "string" + ] + }, + { + "Name": "MsgPrefix", + "Docs": "Data to send before the contents from the file, typically with headers like DKIM-Signature.", + "Typewords": [ + "[]", + "uint8" + ] + }, + { + "Name": "Subject", + "Docs": "For context about delivery.", + "Typewords": [ + "string" + ] + }, + { + "Name": "DSNUTF8", + "Docs": "If set, this message is a DSN and this is a version using utf-8, for the case the remote MTA supports smtputf8. 
In this case, Size and MsgPrefix are not relevant.", + "Typewords": [ + "[]", + "uint8" + ] + }, + { + "Name": "Transport", + "Docs": "If non-empty, the transport to use for this message. Can be set through cli or admin interface. If empty (the default for a submitted message), regular routing rules apply.", + "Typewords": [ + "string" + ] + }, + { + "Name": "RequireTLS", + "Docs": "RequireTLS influences TLS verification during delivery. If nil, the recipient domain policy is followed (MTA-STS and/or DANE), falling back to optional opportunistic non-verified STARTTLS. If RequireTLS is true (through SMTP REQUIRETLS extension or webmail submit), MTA-STS or DANE is required, as well as REQUIRETLS support by the next hop server. If RequireTLS is false (through message header \"TLS-Required: No\"), the recipient domain's policy is ignored if it does not lead to a successful TLS connection, i.e. falling back to SMTP delivery with unverified STARTTLS or plain text.", + "Typewords": [ + "nullable", + "bool" + ] + }, + { + "Name": "FutureReleaseRequest", + "Docs": "For DSNs, where the original FUTURERELEASE value must be included as per-message field.
This field should be of the form \"for;\" plus interval, or \"until;\" plus utc date-time.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Extra", + "Docs": "Extra information, for transactional email.", + "Typewords": [ + "{}", + "string" + ] + } + ] + }, + { + "Name": "IPDomain", + "Docs": "IPDomain is an ip address, a domain, or empty.", + "Fields": [ + { + "Name": "IP", + "Docs": "", + "Typewords": [ + "IP" + ] + }, + { + "Name": "Domain", + "Docs": "", + "Typewords": [ + "Domain" + ] + } + ] + }, + { + "Name": "MsgResult", + "Docs": "MsgResult is the result (or work in progress) of a delivery attempt.", + "Fields": [ + { + "Name": "Start", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "Duration", + "Docs": "", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Success", + "Docs": "", + "Typewords": [ + "bool" + ] + }, + { + "Name": "Code", + "Docs": "", + "Typewords": [ + "int32" + ] + }, + { + "Name": "Secode", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Error", + "Docs": "", + "Typewords": [ + "string" + ] + } + ] + }, + { + "Name": "RetiredFilter", + "Docs": "RetiredFilter filters messages to list or operate on. Used by admin web interface\nand cli.\n\nOnly non-empty/non-zero values are applied to the filter. Leaving all fields\nempty/zero matches all messages.", + "Fields": [ + { + "Name": "Max", + "Docs": "", + "Typewords": [ + "int32" + ] + }, + { + "Name": "IDs", + "Docs": "", + "Typewords": [ + "[]", + "int64" + ] + }, + { + "Name": "Account", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "From", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "To", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Submitted", + "Docs": "Whether submitted before/after a time relative to now. 
\"\u003e$duration\" or \"\u003c$duration\", also with \"now\" for duration.", + "Typewords": [ + "string" + ] + }, + { + "Name": "LastActivity", + "Docs": "\"\u003e$duration\" or \"\u003c$duration\", also with \"now\" for duration.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Transport", + "Docs": "", + "Typewords": [ + "nullable", + "string" + ] + }, + { + "Name": "Success", + "Docs": "", + "Typewords": [ + "nullable", + "bool" + ] + } + ] + }, + { + "Name": "RetiredSort", + "Docs": "", + "Fields": [ + { + "Name": "Field", + "Docs": "\"Queued\" or \"LastActivity\"/\"\".", + "Typewords": [ + "string" + ] + }, + { + "Name": "LastID", + "Docs": "If \u003e 0, we return objects beyond this, less/greater depending on Asc.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Last", + "Docs": "Value of Field for last object. Must be set iff LastID is set.", + "Typewords": [ + "any" + ] + }, + { + "Name": "Asc", + "Docs": "Ascending, or descending.", + "Typewords": [ + "bool" + ] + } + ] + }, + { + "Name": "MsgRetired", + "Docs": "MsgRetired is a message for which delivery completed, either successful,\nfailed/canceled. Retired messages are only stored if so configured, and will be\ncleaned up after the configured period.", + "Fields": [ + { + "Name": "ID", + "Docs": "Same ID as it was as Msg.ID.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "BaseID", + "Docs": "", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Queued", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "SenderAccount", + "Docs": "Failures are delivered back to this local account. 
Also used for routing.", + "Typewords": [ + "string" + ] + }, + { + "Name": "SenderLocalpart", + "Docs": "Should be a local user and domain.", + "Typewords": [ + "Localpart" + ] + }, + { + "Name": "SenderDomainStr", + "Docs": "For filtering, unicode.", + "Typewords": [ + "string" + ] + }, + { + "Name": "FromID", + "Docs": "Used to match DSNs.", + "Typewords": [ + "string" + ] + }, { "Name": "RecipientLocalpart", "Docs": "Typically a remote user and domain.", @@ -4133,13 +4830,6 @@ "IP" ] }, - { - "Name": "NextAttempt", - "Docs": "For scheduling.", - "Typewords": [ - "timestamp" - ] - }, { "Name": "LastAttempt", "Docs": "", @@ -4149,10 +4839,11 @@ ] }, { - "Name": "LastError", + "Name": "Results", "Docs": "", "Typewords": [ - "string" + "[]", + "MsgResult" ] }, { @@ -4198,31 +4889,22 @@ ] }, { - "Name": "MsgPrefix", - "Docs": "", + "Name": "Subject", + "Docs": "For context about delivery.", "Typewords": [ - "[]", - "uint8" - ] - }, - { - "Name": "DSNUTF8", - "Docs": "If set, this message is a DSN and this is a version using utf-8, for the case the remote MTA supports smtputf8. In this case, Size and MsgPrefix are not relevant.", - "Typewords": [ - "[]", - "uint8" + "string" ] }, { "Name": "Transport", - "Docs": "If non-empty, the transport to use for this message. Can be set through cli or admin interface. If empty (the default for a submitted message), regular routing rules apply.", + "Docs": "", "Typewords": [ "string" ] }, { "Name": "RequireTLS", - "Docs": "RequireTLS influences TLS verification during delivery. If nil, the recipient domain policy is followed (MTA-STS and/or DANE), falling back to optional opportunistic non-verified STARTTLS. If RequireTLS is true (through SMTP REQUIRETLS extension or webmail submit), MTA-STS or DANE is required, as well as REQUIRETLS support by the next hop server. If RequireTLS is false (through messag header \"TLS-Required: No\"), the recipient domain's policy is ignored if it does not lead to a successful TLS connection, i.e. 
falling back to SMTP delivery with unverified STARTTLS or plain text.", + "Docs": "", "Typewords": [ "nullable", "bool" @@ -4230,7 +4912,92 @@ }, { "Name": "FutureReleaseRequest", - "Docs": "For DSNs, where the original FUTURERELEASE value must be included as per-message field. This field should be of the form \"for;\" plus interval, or \"until;\" plus utc date-time.", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Extra", + "Docs": "Extra information, for transactional email.", + "Typewords": [ + "{}", + "string" + ] + }, + { + "Name": "LastActivity", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "RecipientAddress", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Success", + "Docs": "Whether delivery to next hop succeeded.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "KeepUntil", + "Docs": "", + "Typewords": [ + "timestamp" + ] + } + ] + }, + { + "Name": "HookFilter", + "Docs": "HookFilter filters messages to list or operate on. Used by admin web interface\nand cli.\n\nOnly non-empty/non-zero values are applied to the filter. Leaving all fields\nempty/zero matches all hooks.", + "Fields": [ + { + "Name": "Max", + "Docs": "", + "Typewords": [ + "int32" + ] + }, + { + "Name": "IDs", + "Docs": "", + "Typewords": [ + "[]", + "int64" + ] + }, + { + "Name": "Account", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Submitted", + "Docs": "Whether submitted before/after a time relative to now. 
\"\u003e$duration\" or \"\u003c$duration\", also with \"now\" for duration.", + "Typewords": [ + "string" + ] + }, + { + "Name": "NextAttempt", + "Docs": "\"\u003e$duration\" or \"\u003c$duration\", also with \"now\" for duration.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Event", + "Docs": "Including \"incoming\".", "Typewords": [ "string" ] @@ -4238,21 +5005,434 @@ ] }, { - "Name": "IPDomain", - "Docs": "IPDomain is an ip address, a domain, or empty.", + "Name": "HookSort", + "Docs": "", "Fields": [ { - "Name": "IP", - "Docs": "", + "Name": "Field", + "Docs": "\"Queued\" or \"NextAttempt\"/\"\".", "Typewords": [ - "IP" + "string" ] }, { - "Name": "Domain", + "Name": "LastID", + "Docs": "If \u003e 0, we return objects beyond this, less/greater depending on Asc.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Last", + "Docs": "Value of Field for last object. Must be set iff LastID is set.", + "Typewords": [ + "any" + ] + }, + { + "Name": "Asc", + "Docs": "Ascending, or descending.", + "Typewords": [ + "bool" + ] + } + ] + }, + { + "Name": "Hook", + "Docs": "Hook is a webhook call about a delivery. We'll try delivering with backoff until we succeed or fail.", + "Fields": [ + { + "Name": "ID", "Docs": "", "Typewords": [ - "Domain" + "int64" + ] + }, + { + "Name": "QueueMsgID", + "Docs": "Original queue Msg/MsgRetired ID. Zero for hooks for incoming messages.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "FromID", + "Docs": "As generated by us and returned in webapi call. Can be empty, for incoming messages to our base address.", + "Typewords": [ + "string" + ] + }, + { + "Name": "MessageID", + "Docs": "Of outgoing or incoming messages. 
Includes \u003c\u003e.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Subject", + "Docs": "Subject of original outgoing message, or of incoming message.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Extra", + "Docs": "From submitted message.", + "Typewords": [ + "{}", + "string" + ] + }, + { + "Name": "Account", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "URL", + "Docs": "Taken from config when webhook is scheduled.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Authorization", + "Docs": "Optional value for authorization header to include in HTTP request.", + "Typewords": [ + "string" + ] + }, + { + "Name": "IsIncoming", + "Docs": "", + "Typewords": [ + "bool" + ] + }, + { + "Name": "OutgoingEvent", + "Docs": "Empty string if not outgoing.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Payload", + "Docs": "JSON data to be submitted.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Submitted", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "Attempts", + "Docs": "", + "Typewords": [ + "int32" + ] + }, + { + "Name": "NextAttempt", + "Docs": "Index for fast scheduling.", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "Results", + "Docs": "", + "Typewords": [ + "[]", + "HookResult" + ] + } + ] + }, + { + "Name": "HookResult", + "Docs": "HookResult is the result of a single attempt to deliver a webhook.", + "Fields": [ + { + "Name": "Start", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "Duration", + "Docs": "", + "Typewords": [ + "int64" + ] + }, + { + "Name": "URL", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Success", + "Docs": "", + "Typewords": [ + "bool" + ] + }, + { + "Name": "Code", + "Docs": "eg 200, 404, 500. 
2xx implies success.", + "Typewords": [ + "int32" + ] + }, + { + "Name": "Error", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Response", + "Docs": "Max 512 bytes of HTTP response body.", + "Typewords": [ + "string" + ] + } + ] + }, + { + "Name": "HookRetiredFilter", + "Docs": "HookRetiredFilter filters messages to list or operate on. Used by admin web interface\nand cli.\n\nOnly non-empty/non-zero values are applied to the filter. Leaving all fields\nempty/zero matches all hooks.", + "Fields": [ + { + "Name": "Max", + "Docs": "", + "Typewords": [ + "int32" + ] + }, + { + "Name": "IDs", + "Docs": "", + "Typewords": [ + "[]", + "int64" + ] + }, + { + "Name": "Account", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Submitted", + "Docs": "Whether submitted before/after a time relative to now. \"\u003e$duration\" or \"\u003c$duration\", also with \"now\" for duration.", + "Typewords": [ + "string" + ] + }, + { + "Name": "LastActivity", + "Docs": "\"\u003e$duration\" or \"\u003c$duration\", also with \"now\" for duration.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Event", + "Docs": "Including \"incoming\".", + "Typewords": [ + "string" + ] + } + ] + }, + { + "Name": "HookRetiredSort", + "Docs": "", + "Fields": [ + { + "Name": "Field", + "Docs": "\"Queued\" or \"LastActivity\"/\"\".", + "Typewords": [ + "string" + ] + }, + { + "Name": "LastID", + "Docs": "If \u003e 0, we return objects beyond this, less/greater depending on Asc.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Last", + "Docs": "Value of Field for last object. 
Must be set iff LastID is set.", + "Typewords": [ + "any" + ] + }, + { + "Name": "Asc", + "Docs": "Ascending, or descending.", + "Typewords": [ + "bool" + ] + } + ] + }, + { + "Name": "HookRetired", + "Docs": "HookRetired is a Hook that was delivered/failed/canceled and kept according\nto the configuration.", + "Fields": [ + { + "Name": "ID", + "Docs": "Same as original Hook.ID.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "QueueMsgID", + "Docs": "Original queue Msg or MsgRetired ID. Zero for hooks for incoming messages.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "FromID", + "Docs": "As generated by us and returned in webapi call. Can be empty, for incoming messages to our base address.", + "Typewords": [ + "string" + ] + }, + { + "Name": "MessageID", + "Docs": "Of outgoing or incoming messages. Includes \u003c\u003e.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Subject", + "Docs": "Subject of original outgoing message, or of incoming message.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Extra", + "Docs": "From submitted message.", + "Typewords": [ + "{}", + "string" + ] + }, + { + "Name": "Account", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "URL", + "Docs": "Taken from config at start of each attempt.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Authorization", + "Docs": "Whether request had authorization without keeping it around.", + "Typewords": [ + "bool" + ] + }, + { + "Name": "IsIncoming", + "Docs": "", + "Typewords": [ + "bool" + ] + }, + { + "Name": "OutgoingEvent", + "Docs": "", + "Typewords": [ + "string" + ] + }, + { + "Name": "Payload", + "Docs": "JSON data submitted.", + "Typewords": [ + "string" + ] + }, + { + "Name": "Submitted", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "SupersededByID", + "Docs": "If not 0, a Hook.ID that superseded this one and Done will be true.", + "Typewords": [ + "int64" + ] + }, + { + "Name": "Attempts", + "Docs": "", + "Typewords": [ 
+ "int32" + ] + }, + { + "Name": "Results", + "Docs": "", + "Typewords": [ + "[]", + "HookResult" + ] + }, + { + "Name": "Success", + "Docs": "", + "Typewords": [ + "bool" + ] + }, + { + "Name": "LastActivity", + "Docs": "", + "Typewords": [ + "timestamp" + ] + }, + { + "Name": "KeepUntil", + "Docs": "", + "Typewords": [ + "timestamp" ] } ] diff --git a/webadmin/api.ts b/webadmin/api.ts index 4f2efc4..f22e771 100644 --- a/webadmin/api.ts +++ b/webadmin/api.ts @@ -267,6 +267,11 @@ export interface AutodiscoverSRV { } export interface Account { + OutgoingWebhook?: OutgoingWebhook | null + IncomingWebhook?: IncomingWebhook | null + FromIDLoginAddresses?: string[] | null + KeepRetiredMessagePeriod: number + KeepRetiredWebhookPeriod: number Domain: string Description: string FullName: string @@ -284,6 +289,17 @@ export interface Account { DNSDomain: Domain // Parsed form of Domain. } +export interface OutgoingWebhook { + URL: string + Authorization: string + Events?: string[] | null +} + +export interface IncomingWebhook { + URL: string + Authorization: string +} + export interface Destination { Mailbox: string Rulesets?: Ruleset[] | null @@ -550,6 +566,7 @@ export interface HoldRule { // Only non-empty/non-zero values are applied to the filter. Leaving all fields // empty/zero matches all messages. export interface Filter { + Max: number IDs?: number[] | null Account: string From: string @@ -560,6 +577,13 @@ export interface Filter { Transport?: string | null } +export interface Sort { + Field: string // "Queued" or "NextAttempt"/"". + LastID: number // If > 0, we return objects beyond this, less/greater depending on Asc. + Last: any // Value of Field for last object. Must be set iff LastID is set. + Asc: boolean // Ascending, or descending. +} + // Msg is a message in the queue. // // Use MakeMsg to make a message with fields that Add needs. 
Add will further set @@ -573,26 +597,29 @@ export interface Msg { SenderLocalpart: Localpart // Should be a local user and domain. SenderDomain: IPDomain SenderDomainStr: string // For filtering, unicode. + FromID: string // For transactional messages, used to match later DSNs. RecipientLocalpart: Localpart // Typically a remote user and domain. RecipientDomain: IPDomain - RecipientDomainStr: string // For filtering, unicode. + RecipientDomainStr: string // For filtering, unicode domain. Can also contain ip enclosed in []. Attempts: number // Next attempt is based on last attempt and exponential back off based on attempts. MaxAttempts: number // Max number of attempts before giving up. If 0, then the default of 8 attempts is used instead. DialedIPs?: { [key: string]: IP[] | null } // For each host, the IPs that were dialed. Used for IP selection for later attempts. NextAttempt: Date // For scheduling. LastAttempt?: Date | null - LastError: string + Results?: MsgResult[] | null Has8bit: boolean // Whether message contains bytes with high bit set, determines whether 8BITMIME SMTP extension is needed. SMTPUTF8: boolean // Whether message requires use of SMTPUTF8. IsDMARCReport: boolean // Delivery failures for DMARC reports are handled differently. IsTLSReport: boolean // Delivery failures for TLS reports are handled differently. Size: number // Full size of message, combined MsgPrefix with contents of message file. - MessageID: string // Used when composing a DSN, in its References header. - MsgPrefix?: string | null + MessageID: string // Message-ID header, including <>. Used when composing a DSN, in its References header. + MsgPrefix?: string | null // Data to send before the contents from the file, typically with headers like DKIM-Signature. + Subject: string // For context about delivery. DSNUTF8?: string | null // If set, this message is a DSN and this is a version using utf-8, for the case the remote MTA supports smtputf8. 
In this case, Size and MsgPrefix are not relevant. Transport: string // If non-empty, the transport to use for this message. Can be set through cli or admin interface. If empty (the default for a submitted message), regular routing rules apply. RequireTLS?: boolean | null // RequireTLS influences TLS verification during delivery. If nil, the recipient domain policy is followed (MTA-STS and/or DANE), falling back to optional opportunistic non-verified STARTTLS. If RequireTLS is true (through SMTP REQUIRETLS extension or webmail submit), MTA-STS or DANE is required, as well as REQUIRETLS support by the next hop server. If RequireTLS is false (through messag header "TLS-Required: No"), the recipient domain's policy is ignored if it does not lead to a successful TLS connection, i.e. falling back to SMTP delivery with unverified STARTTLS or plain text. FutureReleaseRequest: string // For DSNs, where the original FUTURERELEASE value must be included as per-message field. This field should be of the form "for;" plus interval, or "until;" plus utc date-time. + Extra?: { [key: string]: string } // Extra information, for transactional email. } // IPDomain is an ip address, a domain, or empty. @@ -601,6 +628,173 @@ export interface IPDomain { Domain: Domain } +// MsgResult is the result (or work in progress) of a delivery attempt. +export interface MsgResult { + Start: Date + Duration: number + Success: boolean + Code: number + Secode: string + Error: string +} + +// RetiredFilter filters messages to list or operate on. Used by admin web interface +// and cli. +// +// Only non-empty/non-zero values are applied to the filter. Leaving all fields +// empty/zero matches all messages. +export interface RetiredFilter { + Max: number + IDs?: number[] | null + Account: string + From: string + To: string + Submitted: string // Whether submitted before/after a time relative to now. ">$duration" or "<$duration", also with "now" for duration. 
+ LastActivity: string // ">$duration" or "<$duration", also with "now" for duration. + Transport?: string | null + Success?: boolean | null +} + +export interface RetiredSort { + Field: string // "Queued" or "LastActivity"/"". + LastID: number // If > 0, we return objects beyond this, less/greater depending on Asc. + Last: any // Value of Field for last object. Must be set iff LastID is set. + Asc: boolean // Ascending, or descending. +} + +// MsgRetired is a message for which delivery completed, either successful, +// failed/canceled. Retired messages are only stored if so configured, and will be +// cleaned up after the configured period. +export interface MsgRetired { + ID: number // Same ID as it was as Msg.ID. + BaseID: number + Queued: Date + SenderAccount: string // Failures are delivered back to this local account. Also used for routing. + SenderLocalpart: Localpart // Should be a local user and domain. + SenderDomainStr: string // For filtering, unicode. + FromID: string // Used to match DSNs. + RecipientLocalpart: Localpart // Typically a remote user and domain. + RecipientDomain: IPDomain + RecipientDomainStr: string // For filtering, unicode. + Attempts: number // Next attempt is based on last attempt and exponential back off based on attempts. + MaxAttempts: number // Max number of attempts before giving up. If 0, then the default of 8 attempts is used instead. + DialedIPs?: { [key: string]: IP[] | null } // For each host, the IPs that were dialed. Used for IP selection for later attempts. + LastAttempt?: Date | null + Results?: MsgResult[] | null + Has8bit: boolean // Whether message contains bytes with high bit set, determines whether 8BITMIME SMTP extension is needed. + SMTPUTF8: boolean // Whether message requires use of SMTPUTF8. + IsDMARCReport: boolean // Delivery failures for DMARC reports are handled differently. + IsTLSReport: boolean // Delivery failures for TLS reports are handled differently. 
+ Size: number // Full size of message, combined MsgPrefix with contents of message file. + MessageID: string // Used when composing a DSN, in its References header. + Subject: string // For context about delivery. + Transport: string + RequireTLS?: boolean | null + FutureReleaseRequest: string + Extra?: { [key: string]: string } // Extra information, for transactional email. + LastActivity: Date + RecipientAddress: string + Success: boolean // Whether delivery to next hop succeeded. + KeepUntil: Date +} + +// HookFilter filters messages to list or operate on. Used by admin web interface +// and cli. +// +// Only non-empty/non-zero values are applied to the filter. Leaving all fields +// empty/zero matches all hooks. +export interface HookFilter { + Max: number + IDs?: number[] | null + Account: string + Submitted: string // Whether submitted before/after a time relative to now. ">$duration" or "<$duration", also with "now" for duration. + NextAttempt: string // ">$duration" or "<$duration", also with "now" for duration. + Event: string // Including "incoming". +} + +export interface HookSort { + Field: string // "Queued" or "NextAttempt"/"". + LastID: number // If > 0, we return objects beyond this, less/greater depending on Asc. + Last: any // Value of Field for last object. Must be set iff LastID is set. + Asc: boolean // Ascending, or descending. +} + +// Hook is a webhook call about a delivery. We'll try delivering with backoff until we succeed or fail. +export interface Hook { + ID: number + QueueMsgID: number // Original queue Msg/MsgRetired ID. Zero for hooks for incoming messages. + FromID: string // As generated by us and returned in webapi call. Can be empty, for incoming messages to our base address. + MessageID: string // Of outgoing or incoming messages. Includes <>. + Subject: string // Subject of original outgoing message, or of incoming message. + Extra?: { [key: string]: string } // From submitted message. 
+ Account: string + URL: string // Taken from config when webhook is scheduled. + Authorization: string // Optional value for authorization header to include in HTTP request. + IsIncoming: boolean + OutgoingEvent: string // Empty string if not outgoing. + Payload: string // JSON data to be submitted. + Submitted: Date + Attempts: number + NextAttempt: Date // Index for fast scheduling. + Results?: HookResult[] | null +} + +// HookResult is the result of a single attempt to deliver a webhook. +export interface HookResult { + Start: Date + Duration: number + URL: string + Success: boolean + Code: number // eg 200, 404, 500. 2xx implies success. + Error: string + Response: string // Max 512 bytes of HTTP response body. +} + +// HookRetiredFilter filters messages to list or operate on. Used by admin web interface +// and cli. +// +// Only non-empty/non-zero values are applied to the filter. Leaving all fields +// empty/zero matches all hooks. +export interface HookRetiredFilter { + Max: number + IDs?: number[] | null + Account: string + Submitted: string // Whether submitted before/after a time relative to now. ">$duration" or "<$duration", also with "now" for duration. + LastActivity: string // ">$duration" or "<$duration", also with "now" for duration. + Event: string // Including "incoming". +} + +export interface HookRetiredSort { + Field: string // "Queued" or "LastActivity"/"". + LastID: number // If > 0, we return objects beyond this, less/greater depending on Asc. + Last: any // Value of Field for last object. Must be set iff LastID is set. + Asc: boolean // Ascending, or descending. +} + +// HookRetired is a Hook that was delivered/failed/canceled and kept according +// to the configuration. +export interface HookRetired { + ID: number // Same as original Hook.ID. + QueueMsgID: number // Original queue Msg or MsgRetired ID. Zero for hooks for incoming messages. + FromID: string // As generated by us and returned in webapi call. 
Can be empty, for incoming messages to our base address. + MessageID: string // Of outgoing or incoming messages. Includes <>. + Subject: string // Subject of original outgoing message, or of incoming message. + Extra?: { [key: string]: string } // From submitted message. + Account: string + URL: string // Taken from config at start of each attempt. + Authorization: boolean // Whether request had authorization without keeping it around. + IsIncoming: boolean + OutgoingEvent: string + Payload: string // JSON data submitted. + Submitted: Date + SupersededByID: number // If not 0, a Hook.ID that superseded this one and Done will be true. + Attempts: number + Results?: HookResult[] | null + Success: boolean + LastActivity: Date + KeepUntil: Date +} + // WebserverConfig is the combination of WebDomainRedirects and WebHandlers // from the domains.conf configuration file. export interface WebserverConfig { @@ -883,7 +1077,7 @@ export type Localpart = string // be an IPv4 address. export type IP = string -export const structTypes: {[typename: string]: boolean} = 
{"Account":true,"AuthResults":true,"AutoconfCheckResult":true,"AutodiscoverCheckResult":true,"AutodiscoverSRV":true,"AutomaticJunkFlags":true,"CheckResult":true,"ClientConfigs":true,"ClientConfigsEntry":true,"DANECheckResult":true,"DKIMAuthResult":true,"DKIMCheckResult":true,"DKIMRecord":true,"DMARCCheckResult":true,"DMARCRecord":true,"DMARCSummary":true,"DNSSECResult":true,"DateRange":true,"Destination":true,"Directive":true,"Domain":true,"DomainFeedback":true,"Evaluation":true,"EvaluationStat":true,"Extension":true,"FailureDetails":true,"Filter":true,"HoldRule":true,"IPDomain":true,"IPRevCheckResult":true,"Identifiers":true,"JunkFilter":true,"MTASTSCheckResult":true,"MTASTSRecord":true,"MX":true,"MXCheckResult":true,"Modifier":true,"Msg":true,"Pair":true,"Policy":true,"PolicyEvaluated":true,"PolicyOverrideReason":true,"PolicyPublished":true,"PolicyRecord":true,"Record":true,"Report":true,"ReportMetadata":true,"ReportRecord":true,"Result":true,"ResultPolicy":true,"Reverse":true,"Route":true,"Row":true,"Ruleset":true,"SMTPAuth":true,"SPFAuthResult":true,"SPFCheckResult":true,"SPFRecord":true,"SRV":true,"SRVConfCheckResult":true,"STSMX":true,"SubjectPass":true,"Summary":true,"SuppressAddress":true,"TLSCheckResult":true,"TLSRPTCheckResult":true,"TLSRPTDateRange":true,"TLSRPTRecord":true,"TLSRPTSummary":true,"TLSRPTSuppressAddress":true,"TLSReportRecord":true,"TLSResult":true,"Transport":true,"TransportDirect":true,"TransportSMTP":true,"TransportSocks":true,"URI":true,"WebForward":true,"WebHandler":true,"WebRedirect":true,"WebStatic":true,"WebserverConfig":true} +export const structTypes: {[typename: string]: boolean} = 
{"Account":true,"AuthResults":true,"AutoconfCheckResult":true,"AutodiscoverCheckResult":true,"AutodiscoverSRV":true,"AutomaticJunkFlags":true,"CheckResult":true,"ClientConfigs":true,"ClientConfigsEntry":true,"DANECheckResult":true,"DKIMAuthResult":true,"DKIMCheckResult":true,"DKIMRecord":true,"DMARCCheckResult":true,"DMARCRecord":true,"DMARCSummary":true,"DNSSECResult":true,"DateRange":true,"Destination":true,"Directive":true,"Domain":true,"DomainFeedback":true,"Evaluation":true,"EvaluationStat":true,"Extension":true,"FailureDetails":true,"Filter":true,"HoldRule":true,"Hook":true,"HookFilter":true,"HookResult":true,"HookRetired":true,"HookRetiredFilter":true,"HookRetiredSort":true,"HookSort":true,"IPDomain":true,"IPRevCheckResult":true,"Identifiers":true,"IncomingWebhook":true,"JunkFilter":true,"MTASTSCheckResult":true,"MTASTSRecord":true,"MX":true,"MXCheckResult":true,"Modifier":true,"Msg":true,"MsgResult":true,"MsgRetired":true,"OutgoingWebhook":true,"Pair":true,"Policy":true,"PolicyEvaluated":true,"PolicyOverrideReason":true,"PolicyPublished":true,"PolicyRecord":true,"Record":true,"Report":true,"ReportMetadata":true,"ReportRecord":true,"Result":true,"ResultPolicy":true,"RetiredFilter":true,"RetiredSort":true,"Reverse":true,"Route":true,"Row":true,"Ruleset":true,"SMTPAuth":true,"SPFAuthResult":true,"SPFCheckResult":true,"SPFRecord":true,"SRV":true,"SRVConfCheckResult":true,"STSMX":true,"Sort":true,"SubjectPass":true,"Summary":true,"SuppressAddress":true,"TLSCheckResult":true,"TLSRPTCheckResult":true,"TLSRPTDateRange":true,"TLSRPTRecord":true,"TLSRPTSummary":true,"TLSRPTSuppressAddress":true,"TLSReportRecord":true,"TLSResult":true,"Transport":true,"TransportDirect":true,"TransportSMTP":true,"TransportSocks":true,"URI":true,"WebForward":true,"WebHandler":true,"WebRedirect":true,"WebStatic":true,"WebserverConfig":true} export const stringsTypes: {[typename: string]: boolean} = 
{"Align":true,"Alignment":true,"CSRFToken":true,"DKIMResult":true,"DMARCPolicy":true,"DMARCResult":true,"Disposition":true,"IP":true,"Localpart":true,"Mode":true,"PolicyOverride":true,"PolicyType":true,"RUA":true,"ResultType":true,"SPFDomainScope":true,"SPFResult":true} export const intsTypes: {[typename: string]: boolean} = {} export const types: TypenameMap = { @@ -918,7 +1112,9 @@ export const types: TypenameMap = { "AutoconfCheckResult": {"Name":"AutoconfCheckResult","Docs":"","Fields":[{"Name":"ClientSettingsDomainIPs","Docs":"","Typewords":["[]","string"]},{"Name":"IPs","Docs":"","Typewords":["[]","string"]},{"Name":"Errors","Docs":"","Typewords":["[]","string"]},{"Name":"Warnings","Docs":"","Typewords":["[]","string"]},{"Name":"Instructions","Docs":"","Typewords":["[]","string"]}]}, "AutodiscoverCheckResult": {"Name":"AutodiscoverCheckResult","Docs":"","Fields":[{"Name":"Records","Docs":"","Typewords":["[]","AutodiscoverSRV"]},{"Name":"Errors","Docs":"","Typewords":["[]","string"]},{"Name":"Warnings","Docs":"","Typewords":["[]","string"]},{"Name":"Instructions","Docs":"","Typewords":["[]","string"]}]}, "AutodiscoverSRV": {"Name":"AutodiscoverSRV","Docs":"","Fields":[{"Name":"Target","Docs":"","Typewords":["string"]},{"Name":"Port","Docs":"","Typewords":["uint16"]},{"Name":"Priority","Docs":"","Typewords":["uint16"]},{"Name":"Weight","Docs":"","Typewords":["uint16"]},{"Name":"IPs","Docs":"","Typewords":["[]","string"]}]}, - "Account": 
{"Name":"Account","Docs":"","Fields":[{"Name":"Domain","Docs":"","Typewords":["string"]},{"Name":"Description","Docs":"","Typewords":["string"]},{"Name":"FullName","Docs":"","Typewords":["string"]},{"Name":"Destinations","Docs":"","Typewords":["{}","Destination"]},{"Name":"SubjectPass","Docs":"","Typewords":["SubjectPass"]},{"Name":"QuotaMessageSize","Docs":"","Typewords":["int64"]},{"Name":"RejectsMailbox","Docs":"","Typewords":["string"]},{"Name":"KeepRejects","Docs":"","Typewords":["bool"]},{"Name":"AutomaticJunkFlags","Docs":"","Typewords":["AutomaticJunkFlags"]},{"Name":"JunkFilter","Docs":"","Typewords":["nullable","JunkFilter"]},{"Name":"MaxOutgoingMessagesPerDay","Docs":"","Typewords":["int32"]},{"Name":"MaxFirstTimeRecipientsPerDay","Docs":"","Typewords":["int32"]},{"Name":"NoFirstTimeSenderDelay","Docs":"","Typewords":["bool"]},{"Name":"Routes","Docs":"","Typewords":["[]","Route"]},{"Name":"DNSDomain","Docs":"","Typewords":["Domain"]}]}, + "Account": {"Name":"Account","Docs":"","Fields":[{"Name":"OutgoingWebhook","Docs":"","Typewords":["nullable","OutgoingWebhook"]},{"Name":"IncomingWebhook","Docs":"","Typewords":["nullable","IncomingWebhook"]},{"Name":"FromIDLoginAddresses","Docs":"","Typewords":["[]","string"]},{"Name":"KeepRetiredMessagePeriod","Docs":"","Typewords":["int64"]},{"Name":"KeepRetiredWebhookPeriod","Docs":"","Typewords":["int64"]},{"Name":"Domain","Docs":"","Typewords":["string"]},{"Name":"Description","Docs":"","Typewords":["string"]},{"Name":"FullName","Docs":"","Typewords":["string"]},{"Name":"Destinations","Docs":"","Typewords":["{}","Destination"]},{"Name":"SubjectPass","Docs":"","Typewords":["SubjectPass"]},{"Name":"QuotaMessageSize","Docs":"","Typewords":["int64"]},{"Name":"RejectsMailbox","Docs":"","Typewords":["string"]},{"Name":"KeepRejects","Docs":"","Typewords":["bool"]},{"Name":"AutomaticJunkFlags","Docs":"","Typewords":["AutomaticJunkFlags"]},{"Name":"JunkFilter","Docs":"","Typewords":["nullable","JunkFilter"]},{"Name":"MaxOut
goingMessagesPerDay","Docs":"","Typewords":["int32"]},{"Name":"MaxFirstTimeRecipientsPerDay","Docs":"","Typewords":["int32"]},{"Name":"NoFirstTimeSenderDelay","Docs":"","Typewords":["bool"]},{"Name":"Routes","Docs":"","Typewords":["[]","Route"]},{"Name":"DNSDomain","Docs":"","Typewords":["Domain"]}]}, + "OutgoingWebhook": {"Name":"OutgoingWebhook","Docs":"","Fields":[{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Authorization","Docs":"","Typewords":["string"]},{"Name":"Events","Docs":"","Typewords":["[]","string"]}]}, + "IncomingWebhook": {"Name":"IncomingWebhook","Docs":"","Fields":[{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Authorization","Docs":"","Typewords":["string"]}]}, "Destination": {"Name":"Destination","Docs":"","Fields":[{"Name":"Mailbox","Docs":"","Typewords":["string"]},{"Name":"Rulesets","Docs":"","Typewords":["[]","Ruleset"]},{"Name":"FullName","Docs":"","Typewords":["string"]}]}, "Ruleset": {"Name":"Ruleset","Docs":"","Fields":[{"Name":"SMTPMailFromRegexp","Docs":"","Typewords":["string"]},{"Name":"VerifiedDomain","Docs":"","Typewords":["string"]},{"Name":"HeadersRegexp","Docs":"","Typewords":["{}","string"]},{"Name":"IsForward","Docs":"","Typewords":["bool"]},{"Name":"ListAllowDomain","Docs":"","Typewords":["string"]},{"Name":"AcceptRejectsToMailbox","Docs":"","Typewords":["string"]},{"Name":"Mailbox","Docs":"","Typewords":["string"]},{"Name":"VerifiedDNSDomain","Docs":"","Typewords":["Domain"]},{"Name":"ListAllowDNSDomain","Docs":"","Typewords":["Domain"]}]}, "SubjectPass": {"Name":"SubjectPass","Docs":"","Fields":[{"Name":"Period","Docs":"","Typewords":["int64"]}]}, @@ -951,9 +1147,21 @@ export const types: TypenameMap = { "ClientConfigs": {"Name":"ClientConfigs","Docs":"","Fields":[{"Name":"Entries","Docs":"","Typewords":["[]","ClientConfigsEntry"]}]}, "ClientConfigsEntry": 
{"Name":"ClientConfigsEntry","Docs":"","Fields":[{"Name":"Protocol","Docs":"","Typewords":["string"]},{"Name":"Host","Docs":"","Typewords":["Domain"]},{"Name":"Port","Docs":"","Typewords":["int32"]},{"Name":"Listener","Docs":"","Typewords":["string"]},{"Name":"Note","Docs":"","Typewords":["string"]}]}, "HoldRule": {"Name":"HoldRule","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"SenderDomain","Docs":"","Typewords":["Domain"]},{"Name":"RecipientDomain","Docs":"","Typewords":["Domain"]},{"Name":"SenderDomainStr","Docs":"","Typewords":["string"]},{"Name":"RecipientDomainStr","Docs":"","Typewords":["string"]}]}, - "Filter": {"Name":"Filter","Docs":"","Fields":[{"Name":"IDs","Docs":"","Typewords":["[]","int64"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"From","Docs":"","Typewords":["string"]},{"Name":"To","Docs":"","Typewords":["string"]},{"Name":"Hold","Docs":"","Typewords":["nullable","bool"]},{"Name":"Submitted","Docs":"","Typewords":["string"]},{"Name":"NextAttempt","Docs":"","Typewords":["string"]},{"Name":"Transport","Docs":"","Typewords":["nullable","string"]}]}, - "Msg": 
{"Name":"Msg","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"BaseID","Docs":"","Typewords":["int64"]},{"Name":"Queued","Docs":"","Typewords":["timestamp"]},{"Name":"Hold","Docs":"","Typewords":["bool"]},{"Name":"SenderAccount","Docs":"","Typewords":["string"]},{"Name":"SenderLocalpart","Docs":"","Typewords":["Localpart"]},{"Name":"SenderDomain","Docs":"","Typewords":["IPDomain"]},{"Name":"SenderDomainStr","Docs":"","Typewords":["string"]},{"Name":"RecipientLocalpart","Docs":"","Typewords":["Localpart"]},{"Name":"RecipientDomain","Docs":"","Typewords":["IPDomain"]},{"Name":"RecipientDomainStr","Docs":"","Typewords":["string"]},{"Name":"Attempts","Docs":"","Typewords":["int32"]},{"Name":"MaxAttempts","Docs":"","Typewords":["int32"]},{"Name":"DialedIPs","Docs":"","Typewords":["{}","[]","IP"]},{"Name":"NextAttempt","Docs":"","Typewords":["timestamp"]},{"Name":"LastAttempt","Docs":"","Typewords":["nullable","timestamp"]},{"Name":"LastError","Docs":"","Typewords":["string"]},{"Name":"Has8bit","Docs":"","Typewords":["bool"]},{"Name":"SMTPUTF8","Docs":"","Typewords":["bool"]},{"Name":"IsDMARCReport","Docs":"","Typewords":["bool"]},{"Name":"IsTLSReport","Docs":"","Typewords":["bool"]},{"Name":"Size","Docs":"","Typewords":["int64"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"MsgPrefix","Docs":"","Typewords":["nullable","string"]},{"Name":"DSNUTF8","Docs":"","Typewords":["nullable","string"]},{"Name":"Transport","Docs":"","Typewords":["string"]},{"Name":"RequireTLS","Docs":"","Typewords":["nullable","bool"]},{"Name":"FutureReleaseRequest","Docs":"","Typewords":["string"]}]}, + "Filter": 
{"Name":"Filter","Docs":"","Fields":[{"Name":"Max","Docs":"","Typewords":["int32"]},{"Name":"IDs","Docs":"","Typewords":["[]","int64"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"From","Docs":"","Typewords":["string"]},{"Name":"To","Docs":"","Typewords":["string"]},{"Name":"Hold","Docs":"","Typewords":["nullable","bool"]},{"Name":"Submitted","Docs":"","Typewords":["string"]},{"Name":"NextAttempt","Docs":"","Typewords":["string"]},{"Name":"Transport","Docs":"","Typewords":["nullable","string"]}]}, + "Sort": {"Name":"Sort","Docs":"","Fields":[{"Name":"Field","Docs":"","Typewords":["string"]},{"Name":"LastID","Docs":"","Typewords":["int64"]},{"Name":"Last","Docs":"","Typewords":["any"]},{"Name":"Asc","Docs":"","Typewords":["bool"]}]}, + "Msg": {"Name":"Msg","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"BaseID","Docs":"","Typewords":["int64"]},{"Name":"Queued","Docs":"","Typewords":["timestamp"]},{"Name":"Hold","Docs":"","Typewords":["bool"]},{"Name":"SenderAccount","Docs":"","Typewords":["string"]},{"Name":"SenderLocalpart","Docs":"","Typewords":["Localpart"]},{"Name":"SenderDomain","Docs":"","Typewords":["IPDomain"]},{"Name":"SenderDomainStr","Docs":"","Typewords":["string"]},{"Name":"FromID","Docs":"","Typewords":["string"]},{"Name":"RecipientLocalpart","Docs":"","Typewords":["Localpart"]},{"Name":"RecipientDomain","Docs":"","Typewords":["IPDomain"]},{"Name":"RecipientDomainStr","Docs":"","Typewords":["string"]},{"Name":"Attempts","Docs":"","Typewords":["int32"]},{"Name":"MaxAttempts","Docs":"","Typewords":["int32"]},{"Name":"DialedIPs","Docs":"","Typewords":["{}","[]","IP"]},{"Name":"NextAttempt","Docs":"","Typewords":["timestamp"]},{"Name":"LastAttempt","Docs":"","Typewords":["nullable","timestamp"]},{"Name":"Results","Docs":"","Typewords":["[]","MsgResult"]},{"Name":"Has8bit","Docs":"","Typewords":["bool"]},{"Name":"SMTPUTF8","Docs":"","Typewords":["bool"]},{"Name":"IsDMARCReport","Docs":"","Typewords":["bool"]},{"Nam
e":"IsTLSReport","Docs":"","Typewords":["bool"]},{"Name":"Size","Docs":"","Typewords":["int64"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"MsgPrefix","Docs":"","Typewords":["nullable","string"]},{"Name":"Subject","Docs":"","Typewords":["string"]},{"Name":"DSNUTF8","Docs":"","Typewords":["nullable","string"]},{"Name":"Transport","Docs":"","Typewords":["string"]},{"Name":"RequireTLS","Docs":"","Typewords":["nullable","bool"]},{"Name":"FutureReleaseRequest","Docs":"","Typewords":["string"]},{"Name":"Extra","Docs":"","Typewords":["{}","string"]}]}, "IPDomain": {"Name":"IPDomain","Docs":"","Fields":[{"Name":"IP","Docs":"","Typewords":["IP"]},{"Name":"Domain","Docs":"","Typewords":["Domain"]}]}, + "MsgResult": {"Name":"MsgResult","Docs":"","Fields":[{"Name":"Start","Docs":"","Typewords":["timestamp"]},{"Name":"Duration","Docs":"","Typewords":["int64"]},{"Name":"Success","Docs":"","Typewords":["bool"]},{"Name":"Code","Docs":"","Typewords":["int32"]},{"Name":"Secode","Docs":"","Typewords":["string"]},{"Name":"Error","Docs":"","Typewords":["string"]}]}, + "RetiredFilter": {"Name":"RetiredFilter","Docs":"","Fields":[{"Name":"Max","Docs":"","Typewords":["int32"]},{"Name":"IDs","Docs":"","Typewords":["[]","int64"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"From","Docs":"","Typewords":["string"]},{"Name":"To","Docs":"","Typewords":["string"]},{"Name":"Submitted","Docs":"","Typewords":["string"]},{"Name":"LastActivity","Docs":"","Typewords":["string"]},{"Name":"Transport","Docs":"","Typewords":["nullable","string"]},{"Name":"Success","Docs":"","Typewords":["nullable","bool"]}]}, + "RetiredSort": {"Name":"RetiredSort","Docs":"","Fields":[{"Name":"Field","Docs":"","Typewords":["string"]},{"Name":"LastID","Docs":"","Typewords":["int64"]},{"Name":"Last","Docs":"","Typewords":["any"]},{"Name":"Asc","Docs":"","Typewords":["bool"]}]}, + "MsgRetired": 
{"Name":"MsgRetired","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"BaseID","Docs":"","Typewords":["int64"]},{"Name":"Queued","Docs":"","Typewords":["timestamp"]},{"Name":"SenderAccount","Docs":"","Typewords":["string"]},{"Name":"SenderLocalpart","Docs":"","Typewords":["Localpart"]},{"Name":"SenderDomainStr","Docs":"","Typewords":["string"]},{"Name":"FromID","Docs":"","Typewords":["string"]},{"Name":"RecipientLocalpart","Docs":"","Typewords":["Localpart"]},{"Name":"RecipientDomain","Docs":"","Typewords":["IPDomain"]},{"Name":"RecipientDomainStr","Docs":"","Typewords":["string"]},{"Name":"Attempts","Docs":"","Typewords":["int32"]},{"Name":"MaxAttempts","Docs":"","Typewords":["int32"]},{"Name":"DialedIPs","Docs":"","Typewords":["{}","[]","IP"]},{"Name":"LastAttempt","Docs":"","Typewords":["nullable","timestamp"]},{"Name":"Results","Docs":"","Typewords":["[]","MsgResult"]},{"Name":"Has8bit","Docs":"","Typewords":["bool"]},{"Name":"SMTPUTF8","Docs":"","Typewords":["bool"]},{"Name":"IsDMARCReport","Docs":"","Typewords":["bool"]},{"Name":"IsTLSReport","Docs":"","Typewords":["bool"]},{"Name":"Size","Docs":"","Typewords":["int64"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"Subject","Docs":"","Typewords":["string"]},{"Name":"Transport","Docs":"","Typewords":["string"]},{"Name":"RequireTLS","Docs":"","Typewords":["nullable","bool"]},{"Name":"FutureReleaseRequest","Docs":"","Typewords":["string"]},{"Name":"Extra","Docs":"","Typewords":["{}","string"]},{"Name":"LastActivity","Docs":"","Typewords":["timestamp"]},{"Name":"RecipientAddress","Docs":"","Typewords":["string"]},{"Name":"Success","Docs":"","Typewords":["bool"]},{"Name":"KeepUntil","Docs":"","Typewords":["timestamp"]}]}, + "HookFilter": 
{"Name":"HookFilter","Docs":"","Fields":[{"Name":"Max","Docs":"","Typewords":["int32"]},{"Name":"IDs","Docs":"","Typewords":["[]","int64"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"Submitted","Docs":"","Typewords":["string"]},{"Name":"NextAttempt","Docs":"","Typewords":["string"]},{"Name":"Event","Docs":"","Typewords":["string"]}]}, + "HookSort": {"Name":"HookSort","Docs":"","Fields":[{"Name":"Field","Docs":"","Typewords":["string"]},{"Name":"LastID","Docs":"","Typewords":["int64"]},{"Name":"Last","Docs":"","Typewords":["any"]},{"Name":"Asc","Docs":"","Typewords":["bool"]}]}, + "Hook": {"Name":"Hook","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"QueueMsgID","Docs":"","Typewords":["int64"]},{"Name":"FromID","Docs":"","Typewords":["string"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"Subject","Docs":"","Typewords":["string"]},{"Name":"Extra","Docs":"","Typewords":["{}","string"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Authorization","Docs":"","Typewords":["string"]},{"Name":"IsIncoming","Docs":"","Typewords":["bool"]},{"Name":"OutgoingEvent","Docs":"","Typewords":["string"]},{"Name":"Payload","Docs":"","Typewords":["string"]},{"Name":"Submitted","Docs":"","Typewords":["timestamp"]},{"Name":"Attempts","Docs":"","Typewords":["int32"]},{"Name":"NextAttempt","Docs":"","Typewords":["timestamp"]},{"Name":"Results","Docs":"","Typewords":["[]","HookResult"]}]}, + "HookResult": {"Name":"HookResult","Docs":"","Fields":[{"Name":"Start","Docs":"","Typewords":["timestamp"]},{"Name":"Duration","Docs":"","Typewords":["int64"]},{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Success","Docs":"","Typewords":["bool"]},{"Name":"Code","Docs":"","Typewords":["int32"]},{"Name":"Error","Docs":"","Typewords":["string"]},{"Name":"Response","Docs":"","Typewords":["string"]}]}, + "HookRetiredFilter": 
{"Name":"HookRetiredFilter","Docs":"","Fields":[{"Name":"Max","Docs":"","Typewords":["int32"]},{"Name":"IDs","Docs":"","Typewords":["[]","int64"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"Submitted","Docs":"","Typewords":["string"]},{"Name":"LastActivity","Docs":"","Typewords":["string"]},{"Name":"Event","Docs":"","Typewords":["string"]}]}, + "HookRetiredSort": {"Name":"HookRetiredSort","Docs":"","Fields":[{"Name":"Field","Docs":"","Typewords":["string"]},{"Name":"LastID","Docs":"","Typewords":["int64"]},{"Name":"Last","Docs":"","Typewords":["any"]},{"Name":"Asc","Docs":"","Typewords":["bool"]}]}, + "HookRetired": {"Name":"HookRetired","Docs":"","Fields":[{"Name":"ID","Docs":"","Typewords":["int64"]},{"Name":"QueueMsgID","Docs":"","Typewords":["int64"]},{"Name":"FromID","Docs":"","Typewords":["string"]},{"Name":"MessageID","Docs":"","Typewords":["string"]},{"Name":"Subject","Docs":"","Typewords":["string"]},{"Name":"Extra","Docs":"","Typewords":["{}","string"]},{"Name":"Account","Docs":"","Typewords":["string"]},{"Name":"URL","Docs":"","Typewords":["string"]},{"Name":"Authorization","Docs":"","Typewords":["bool"]},{"Name":"IsIncoming","Docs":"","Typewords":["bool"]},{"Name":"OutgoingEvent","Docs":"","Typewords":["string"]},{"Name":"Payload","Docs":"","Typewords":["string"]},{"Name":"Submitted","Docs":"","Typewords":["timestamp"]},{"Name":"SupersededByID","Docs":"","Typewords":["int64"]},{"Name":"Attempts","Docs":"","Typewords":["int32"]},{"Name":"Results","Docs":"","Typewords":["[]","HookResult"]},{"Name":"Success","Docs":"","Typewords":["bool"]},{"Name":"LastActivity","Docs":"","Typewords":["timestamp"]},{"Name":"KeepUntil","Docs":"","Typewords":["timestamp"]}]}, "WebserverConfig": {"Name":"WebserverConfig","Docs":"","Fields":[{"Name":"WebDNSDomainRedirects","Docs":"","Typewords":["[]","[]","Domain"]},{"Name":"WebDomainRedirects","Docs":"","Typewords":["[]","[]","string"]},{"Name":"WebHandlers","Docs":"","Typewords":["[]","WebHandler"]}]}, 
"WebHandler": {"Name":"WebHandler","Docs":"","Fields":[{"Name":"LogName","Docs":"","Typewords":["string"]},{"Name":"Domain","Docs":"","Typewords":["string"]},{"Name":"PathRegexp","Docs":"","Typewords":["string"]},{"Name":"DontRedirectPlainHTTP","Docs":"","Typewords":["bool"]},{"Name":"Compress","Docs":"","Typewords":["bool"]},{"Name":"WebStatic","Docs":"","Typewords":["nullable","WebStatic"]},{"Name":"WebRedirect","Docs":"","Typewords":["nullable","WebRedirect"]},{"Name":"WebForward","Docs":"","Typewords":["nullable","WebForward"]},{"Name":"Name","Docs":"","Typewords":["string"]},{"Name":"DNSDomain","Docs":"","Typewords":["Domain"]}]}, "WebStatic": {"Name":"WebStatic","Docs":"","Fields":[{"Name":"StripPrefix","Docs":"","Typewords":["string"]},{"Name":"Root","Docs":"","Typewords":["string"]},{"Name":"ListFiles","Docs":"","Typewords":["bool"]},{"Name":"ContinueNotFound","Docs":"","Typewords":["bool"]},{"Name":"ResponseHeaders","Docs":"","Typewords":["{}","string"]}]}, @@ -1020,6 +1228,8 @@ export const parser = { AutodiscoverCheckResult: (v: any) => parse("AutodiscoverCheckResult", v) as AutodiscoverCheckResult, AutodiscoverSRV: (v: any) => parse("AutodiscoverSRV", v) as AutodiscoverSRV, Account: (v: any) => parse("Account", v) as Account, + OutgoingWebhook: (v: any) => parse("OutgoingWebhook", v) as OutgoingWebhook, + IncomingWebhook: (v: any) => parse("IncomingWebhook", v) as IncomingWebhook, Destination: (v: any) => parse("Destination", v) as Destination, Ruleset: (v: any) => parse("Ruleset", v) as Ruleset, SubjectPass: (v: any) => parse("SubjectPass", v) as SubjectPass, @@ -1053,8 +1263,20 @@ export const parser = { ClientConfigsEntry: (v: any) => parse("ClientConfigsEntry", v) as ClientConfigsEntry, HoldRule: (v: any) => parse("HoldRule", v) as HoldRule, Filter: (v: any) => parse("Filter", v) as Filter, + Sort: (v: any) => parse("Sort", v) as Sort, Msg: (v: any) => parse("Msg", v) as Msg, IPDomain: (v: any) => parse("IPDomain", v) as IPDomain, + MsgResult: (v: 
any) => parse("MsgResult", v) as MsgResult, + RetiredFilter: (v: any) => parse("RetiredFilter", v) as RetiredFilter, + RetiredSort: (v: any) => parse("RetiredSort", v) as RetiredSort, + MsgRetired: (v: any) => parse("MsgRetired", v) as MsgRetired, + HookFilter: (v: any) => parse("HookFilter", v) as HookFilter, + HookSort: (v: any) => parse("HookSort", v) as HookSort, + Hook: (v: any) => parse("Hook", v) as Hook, + HookResult: (v: any) => parse("HookResult", v) as HookResult, + HookRetiredFilter: (v: any) => parse("HookRetiredFilter", v) as HookRetiredFilter, + HookRetiredSort: (v: any) => parse("HookRetiredSort", v) as HookRetiredSort, + HookRetired: (v: any) => parse("HookRetired", v) as HookRetired, WebserverConfig: (v: any) => parse("WebserverConfig", v) as WebserverConfig, WebHandler: (v: any) => parse("WebHandler", v) as WebHandler, WebStatic: (v: any) => parse("WebStatic", v) as WebStatic, @@ -1457,11 +1679,11 @@ export class Client { } // QueueList returns the messages currently in the outgoing queue. - async QueueList(filter: Filter): Promise<Msg[] | null> { + async QueueList(filter: Filter, sort: Sort): Promise<Msg[] | null> { const fn: string = "QueueList" - const paramTypes: string[][] = [["Filter"]] + const paramTypes: string[][] = [["Filter"],["Sort"]] const returnTypes: string[][] = [["[]","Msg"]] - const params: any[] = [filter] + const params: any[] = [filter, sort] return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as Msg[] | null } @@ -1532,6 +1754,72 @@ export class Client { return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as number } + // RetiredList returns messages retired from the queue (delivery could + // have succeeded or failed). 
+ async RetiredList(filter: RetiredFilter, sort: RetiredSort): Promise<MsgRetired[] | null> { + const fn: string = "RetiredList" + const paramTypes: string[][] = [["RetiredFilter"],["RetiredSort"]] + const returnTypes: string[][] = [["[]","MsgRetired"]] + const params: any[] = [filter, sort] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as MsgRetired[] | null + } + + // HookQueueSize returns the number of webhooks still to be delivered. + async HookQueueSize(): Promise<number> { + const fn: string = "HookQueueSize" + const paramTypes: string[][] = [] + const returnTypes: string[][] = [["int32"]] + const params: any[] = [] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as number + } + + // HookList lists webhooks still to be delivered. + async HookList(filter: HookFilter, sort: HookSort): Promise<Hook[] | null> { + const fn: string = "HookList" + const paramTypes: string[][] = [["HookFilter"],["HookSort"]] + const returnTypes: string[][] = [["[]","Hook"]] + const params: any[] = [filter, sort] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as Hook[] | null + } + + // HookNextAttemptSet sets a new time for next delivery attempt of matching + // hooks from the queue. + async HookNextAttemptSet(filter: HookFilter, minutes: number): Promise<number> { + const fn: string = "HookNextAttemptSet" + const paramTypes: string[][] = [["HookFilter"],["int32"]] + const returnTypes: string[][] = [["int32"]] + const params: any[] = [filter, minutes] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as number + } + + // HookNextAttemptAdd adds a duration to the time of next delivery attempt of + // matching hooks from the queue. 
+ async HookNextAttemptAdd(filter: HookFilter, minutes: number): Promise<number> { + const fn: string = "HookNextAttemptAdd" + const paramTypes: string[][] = [["HookFilter"],["int32"]] + const returnTypes: string[][] = [["int32"]] + const params: any[] = [filter, minutes] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as number + } + + // HookRetiredList lists retired webhooks. + async HookRetiredList(filter: HookRetiredFilter, sort: HookRetiredSort): Promise<HookRetired[] | null> { + const fn: string = "HookRetiredList" + const paramTypes: string[][] = [["HookRetiredFilter"],["HookRetiredSort"]] + const returnTypes: string[][] = [["[]","HookRetired"]] + const params: any[] = [filter, sort] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as HookRetired[] | null + } + + // HookCancel prevents further delivery attempts of matching webhooks. + async HookCancel(filter: HookFilter): Promise<number> { + const fn: string = "HookCancel" + const paramTypes: string[][] = [["HookFilter"]] + const returnTypes: string[][] = [["int32"]] + const params: any[] = [filter] + return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as number + } + // LogLevels returns the current log levels. async LogLevels(): Promise<{ [key: string]: string }> { const fn: string = "LogLevels" diff --git a/webapi/client.go b/webapi/client.go new file mode 100644 index 0000000..b769df6 --- /dev/null +++ b/webapi/client.go @@ -0,0 +1,244 @@ +package webapi + +import ( + "context" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "strings" +) + +// Client can be used to call webapi methods. +// Client implements [Methods]. +type Client struct { + BaseURL string // For example: http://localhost:1080/webapi/v0/. + Username string // Added as HTTP basic authentication if not empty. 
+ Password string + HTTPClient *http.Client // Optional, defaults to http.DefaultClient. +} + +var _ Methods = Client{} + +func (c Client) httpClient() *http.Client { + if c.HTTPClient != nil { + return c.HTTPClient + } + return http.DefaultClient +} + +func transact[T any](ctx context.Context, c Client, fn string, req any) (resp T, rerr error) { + hresp, err := httpDo(ctx, c, fn, req) + if err != nil { + return resp, err + } + defer hresp.Body.Close() + + if hresp.StatusCode == http.StatusOK { + // Text and HTML of a message can each be 1MB. Another MB for other data would be a + // lot. + err := json.NewDecoder(&limitReader{hresp.Body, 3 * 1024 * 1024}).Decode(&resp) + return resp, err + } + return resp, badResponse(hresp) +} + +func transactReadCloser(ctx context.Context, c Client, fn string, req any) (resp io.ReadCloser, rerr error) { + hresp, err := httpDo(ctx, c, fn, req) + if err != nil { + return nil, err + } + body := hresp.Body + defer func() { + if body != nil { + body.Close() + } + }() + if hresp.StatusCode == http.StatusOK { + r := body + body = nil + return r, nil + } + return nil, badResponse(hresp) +} + +func httpDo(ctx context.Context, c Client, fn string, req any) (*http.Response, error) { + reqbuf, err := json.Marshal(req) + if err != nil { + return nil, fmt.Errorf("marshal request: %v", err) + } + data := url.Values{} + data.Add("request", string(reqbuf)) + hreq, err := http.NewRequestWithContext(ctx, "POST", c.BaseURL+fn, strings.NewReader(data.Encode())) + if err != nil { + return nil, fmt.Errorf("new request: %v", err) + } + hreq.Header.Set("Content-Type", "application/x-www-form-urlencoded") + if c.Username != "" { + hreq.SetBasicAuth(c.Username, c.Password) + } + hresp, err := c.httpClient().Do(hreq) + if err != nil { + return nil, fmt.Errorf("http transaction: %v", err) + } + return hresp, nil +} + +func badResponse(hresp *http.Response) error { + if hresp.StatusCode != http.StatusBadRequest { + return fmt.Errorf("http status %v, expected 
200 ok", hresp.Status) + } + buf, err := io.ReadAll(&limitReader{R: hresp.Body, Limit: 10 * 1024}) + if err != nil { + return fmt.Errorf("reading error from remote: %v", err) + } + var xerr Error + err = json.Unmarshal(buf, &xerr) + if err != nil { + if len(buf) > 512 { + buf = buf[:512] + } + return fmt.Errorf("error parsing error from remote: %v (first 512 bytes of response: %s)", err, string(buf)) + } + return xerr +} + +// Send composes a message and submits it to the queue for delivery for all +// recipients (to, cc, bcc). +// +// Configure your account to use unique SMTP MAIL FROM addresses ("fromid") and to +// keep history of retired messages, for better handling of transactional email, +// automatically managing a suppression list. +// +// Configure webhooks to receive updates about deliveries. +// +// If the request is a multipart/form-data, uploaded files with the form keys +// "inlinefile" and/or "attachedfile" will be added to the message. If the uploaded +// file has content-type and/or content-id headers, they will be included. If no +// content-type is present in the request, and it can be detected, it is included +// automatically. +// +// Example call with a text and html message, with an inline and an attached image: +// +// curl --user mox@localhost:moxmoxmox \ +// --form request='{"To": [{"Address": "mox@localhost"}], "Text": "hi ☺", "HTML": ""}' \ +// --form 'inlinefile=@hi.png;headers="Content-ID: "' \ +// --form attachedfile=@mox.png \ +// http://localhost:1080/webapi/v0/Send +// +// Error codes: +// +// - badAddress, if an email address is invalid. +// - missingBody, if no text and no html body was specified. +// - multipleFrom, if multiple from addresses were specified. +// - badFrom, if a from address was specified that isn't configured for the account. +// - noRecipients, if no recipients were specified. +// - messageLimitReached, if the outgoing message rate limit was reached. 
+// - recipientLimitReached, if the outgoing new recipient rate limit was reached. +// - messageTooLarge, message larger than configured maximum size. +// - malformedMessageID, if MessageID is specified but invalid. +// - sentOverQuota, message submitted, but not stored in Sent mailbox due to quota reached. +func (c Client) Send(ctx context.Context, req SendRequest) (resp SendResult, err error) { + return transact[SendResult](ctx, c, "Send", req) +} + +// SuppressionList returns the addresses on the per-account suppression list. +func (c Client) SuppressionList(ctx context.Context, req SuppressionListRequest) (resp SuppressionListResult, err error) { + return transact[SuppressionListResult](ctx, c, "SuppressionList", req) +} + +// SuppressionAdd adds an address to the suppression list of the account. +// +// Error codes: +// +// - badAddress, if the email address is invalid. +func (c Client) SuppressionAdd(ctx context.Context, req SuppressionAddRequest) (resp SuppressionAddResult, err error) { + return transact[SuppressionAddResult](ctx, c, "SuppressionAdd", req) +} + +// SuppressionRemove removes an address from the suppression list of the account. +// +// Error codes: +// +// - badAddress, if the email address is invalid. +func (c Client) SuppressionRemove(ctx context.Context, req SuppressionRemoveRequest) (resp SuppressionRemoveResult, err error) { + return transact[SuppressionRemoveResult](ctx, c, "SuppressionRemove", req) +} + +// SuppressionPresent returns whether an address is present in the suppression list of the account. +// +// Error codes: +// +// - badAddress, if the email address is invalid. +func (c Client) SuppressionPresent(ctx context.Context, req SuppressionPresentRequest) (resp SuppressionPresentResult, err error) { + return transact[SuppressionPresentResult](ctx, c, "SuppressionPresent", req) +} + +// MessageGet returns a message from the account storage in parsed form. 
+// +// Use [Client.MessageRawGet] for the raw message (internet message file). +// +// Error codes: +// - messageNotFound, if the message does not exist. +func (c Client) MessageGet(ctx context.Context, req MessageGetRequest) (resp MessageGetResult, err error) { + return transact[MessageGetResult](ctx, c, "MessageGet", req) +} + +// MessageRawGet returns the full message in its original form, as stored on disk. +// +// Error codes: +// - messageNotFound, if the message does not exist. +func (c Client) MessageRawGet(ctx context.Context, req MessageRawGetRequest) (resp io.ReadCloser, err error) { + return transactReadCloser(ctx, c, "MessageRawGet", req) +} + +// MessagePartGet returns a single part from a multipart message, by a "parts +// path", a series of indices into the multipart hierarchy as seen in the parsed +// message. The initial selection is the body of the outer message (excluding +// headers). +// +// Error codes: +// - messageNotFound, if the message does not exist. +// - partNotFound, if the part does not exist. +func (c Client) MessagePartGet(ctx context.Context, req MessagePartGetRequest) (resp io.ReadCloser, err error) { + return transactReadCloser(ctx, c, "MessagePartGet", req) +} + +// MessageDelete permanently removes a message from the account storage (not moving +// to a Trash folder). +// +// Error codes: +// - messageNotFound, if the message does not exist. +func (c Client) MessageDelete(ctx context.Context, req MessageDeleteRequest) (resp MessageDeleteResult, err error) { + return transact[MessageDeleteResult](ctx, c, "MessageDelete", req) +} + +// MessageFlagsAdd adds (sets) flags on a message, like the well-known flags +// beginning with a backslash like \seen, \answered, \draft, or well-known flags +// beginning with a dollar like $junk, $notjunk, $forwarded, or custom flags. +// Existing flags are left unchanged. +// +// Error codes: +// - messageNotFound, if the message does not exist. 
+func (c Client) MessageFlagsAdd(ctx context.Context, req MessageFlagsAddRequest) (resp MessageFlagsAddResult, err error) { + return transact[MessageFlagsAddResult](ctx, c, "MessageFlagsAdd", req) +} + +// MessageFlagsRemove removes (clears) flags on a message. +// Other flags are left unchanged. +// +// Error codes: +// - messageNotFound, if the message does not exist. +func (c Client) MessageFlagsRemove(ctx context.Context, req MessageFlagsRemoveRequest) (resp MessageFlagsRemoveResult, err error) { + return transact[MessageFlagsRemoveResult](ctx, c, "MessageFlagsRemove", req) +} + +// MessageMove moves a message to a new mailbox name (folder). The destination +// mailbox name must already exist. +// +// Error codes: +// - messageNotFound, if the message does not exist. +func (c Client) MessageMove(ctx context.Context, req MessageMoveRequest) (resp MessageMoveResult, err error) { + return transact[MessageMoveResult](ctx, c, "MessageMove", req) +} diff --git a/webapi/doc.go b/webapi/doc.go new file mode 100644 index 0000000..43a7fee --- /dev/null +++ b/webapi/doc.go @@ -0,0 +1,367 @@ +// NOTE: DO NOT EDIT, this file is generated by gendoc.sh. + +/* +Package webapi implements a simple HTTP/JSON-based API for interacting with +email, and webhooks for notifications about incoming and outgoing deliveries, +including delivery failures. + +# Overview + +The webapi can be used to compose and send outgoing messages. The HTTP/JSON +API is often easier to use for developers since it doesn't require separate +libraries and/or having (detailed) knowledge about the format of email messages +("Internet Message Format"), or the SMTP protocol and its extensions. + +Webhooks can be configured per account, and help with automated processing of +incoming email, and with handling delivery failures/success. Webhooks are +often easier to use for developers than monitoring a mailbox with IMAP and +processing new incoming email and delivery status notification (DSN) messages. 
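For illustration, a webapi call can also be built with just the Go standard library, mirroring what the Client in webapi/client.go does internally: the JSON-encoded parameters go into a form field named "request" and the account credentials into HTTP basic authentication. This is a sketch, not part of the package; the localhost URL and credentials are the "mox localserve" defaults used in the curl examples.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// buildCall prepares an HTTP request for a webapi method call: the method
// parameters are JSON-encoded into an x-www-form-urlencoded form field named
// "request", and the credentials are added as HTTP basic authentication.
func buildCall(baseURL, username, password, method string, params any) (*http.Request, error) {
	buf, err := json.Marshal(params)
	if err != nil {
		return nil, fmt.Errorf("marshal request: %v", err)
	}
	form := url.Values{}
	form.Add("request", string(buf))
	req, err := http.NewRequest("POST", baseURL+method, strings.NewReader(form.Encode()))
	if err != nil {
		return nil, fmt.Errorf("new request: %v", err)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	if username != "" {
		req.SetBasicAuth(username, password)
	}
	return req, nil
}

func main() {
	// Same parameters as the basic curl Send example: one recipient, a text body.
	params := map[string]any{
		"To":   []map[string]string{{"Address": "mox@localhost"}},
		"Text": "hi ☺",
	}
	req, err := buildCall("http://localhost:1080/webapi/v0/", "mox@localhost", "moxmoxmox", "Send", params)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL)
	// Performing the call would be: resp, err := http.DefaultClient.Do(req).
	// A 200 response carries the JSON result, a 400 response a JSON error object.
}
```

Performing the request is left out here; a real program would check the HTTP status and decode the JSON body, as the package's Client does.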
+ +# Webapi + +The webapi has a base URL at /webapi/v0/ by default, but configurable, which +serves an introduction that points to this documentation and lists the API +methods available. + +An HTTP POST to a method URL under /webapi/v0/ calls that method. The form can be either +"application/x-www-form-urlencoded" or "multipart/form-data". Form field +"request" must contain the request parameters, encoded as JSON. + +HTTP basic authentication is required for calling methods, with an email +address as user name. Use a login address configured for "unique SMTP MAIL +FROM" addresses, and configure a period to "keep retired messages delivered +from the queue" for automatic suppression list management. + +HTTP response status 200 OK indicates a successful method call, status 400 +indicates an error. The response body of an error is a JSON object with a +human-readable "Message" field, and a "Code" field for programmatic handling +(common codes: "user" for user-induced errors, "server" for server-caused +errors). Most successful calls return a JSON object, but some return data +(e.g. a raw message or an attachment of a message). See [Methods] for the +methods and [Client] for their documentation. The first element of their +return values indicates their JSON object type, or io.ReadCloser for non-JSON +data. The request and response types are converted from/to JSON. Optional and +missing/empty fields/values are converted into Go zero values: zero for +numbers, and empty strings, lists and objects. New fields may be added +in response objects in future versions; parsers should ignore unrecognized +fields. + +An HTTP GET to a method URL serves an HTML page showing example +request/response JSON objects in a form and a button to call the method. + +# Webhooks + +Webhooks for outgoing delivery events and incoming deliveries are configured +per account.
+ +A webhook is delivered by an HTTP POST with headers "X-Mox-Webhook-ID" (unique +ID of webhook) and "X-Mox-Webhook-Attempt" (number of delivery attempts, +starting at 1), and a JSON body with the webhook data. Webhook delivery +failures are retried at a schedule similar to message deliveries, until +permanent failure. + +See [webhook.Outgoing] for the fields in a webhook for outgoing deliveries, and +in particular [webhook.OutgoingEvent] for the types of events. + +Only the latest event for the delivery of a particular outgoing message will be +delivered, any webhooks for that message still in the queue (after failure to +deliver) are retired as superseded when a new event occurs. + +Webhooks for incoming deliveries are configured separately from outgoing +deliveries. Incoming DSNs for previously sent messages do not cause a webhook +to the webhook URL for incoming messages, only to the webhook URL for outgoing +delivery events. The incoming webhook JSON payload contains the message +envelope (parsed To, Cc, Bcc, Subject and more headers), the MIME structure, +and the contents of the first text and HTML parts. See [webhook.Incoming] for +the fields in the JSON object. The full message and individual parts, including +attachments, can be retrieved using the webapi. + +# Transactional email + +When sending transactional emails, potentially to many recipients, it is +important to process delivery failure notifications. If messages are rejected, +or email addresses no longer exist, you should stop sending email to those +addresses. If you try to keep sending, the receiving mail servers may consider +that spammy behaviour and blocklist your mail server. + +Automatic suppression list management already prevents most repeated sending +attempts. The webhooks make it easy to receive failure notifications. 
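As a sketch of that bookkeeping: an application can use the failure webhooks to keep its own records in sync with mox's suppression list. The struct mirrors a subset of the outgoing webhook JSON fields shown in the examples of this documentation; the stop list and the "userid" Extra key are hypothetical stand-ins for real application storage and submission-time metadata.

```go
package main

import "fmt"

// outgoing mirrors a subset of the outgoing-delivery webhook JSON fields.
type outgoing struct {
	Event       string            // e.g. "delivered" or "failed"
	DSN         bool              // whether the event came from an incoming DSN message
	Suppressing bool              // whether mox added the address to its suppression list
	Extra       map[string]string // custom data set at submission time
}

// stopList stands in for application storage of users that should no longer
// be emailed.
var stopList = map[string]bool{}

// handleOutgoingEvent stops sending to a user once delivery permanently
// fails, keeping the application's records in sync with mox's own
// suppression handling.
func handleOutgoingEvent(ev outgoing) {
	if ev.Event != "failed" {
		return
	}
	if userid := ev.Extra["userid"]; userid != "" {
		stopList[userid] = true
	}
}

func main() {
	// The "failed" webhook example from this documentation, reduced.
	handleOutgoingEvent(outgoing{Event: "failed", DSN: true, Suppressing: true, Extra: map[string]string{"userid": "456"}})
	fmt.Println(stopList["456"])
	// prints: true
}
```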
+ +To keep spam complaints about your messages to a minimum, include links to +unsubscribe from future messages without requiring further actions from the +user, such as logins. Include an unsubscribe link in the footer, and include +List-* message headers, such as List-Id, List-Unsubscribe and +List-Unsubscribe-Post. + +# Webapi examples + +Below are examples for making webapi calls to a locally running "mox +localserve" with its default credentials. + +Send a basic message: + + $ curl --user mox@localhost:moxmoxmox \ + --data request='{"To": [{"Address": "mox@localhost"}], "Text": "hi ☺"}' \ + http://localhost:1080/webapi/v0/Send + { + "MessageID": "", + "Submissions": [ + { + "Address": "mox@localhost", + "QueueMsgID": 10010, + "FromID": "ZfV16EATHwKEufrSMo055Q" + } + ] + } + +Send a message with files both from form upload and base64 included in JSON: + + $ curl --user mox@localhost:moxmoxmox \ + --form request='{"To": [{"Address": "mox@localhost"}], "Subject": "hello", "Text": "hi ☺", "HTML": "", "AttachedFiles": [{"Name": "img.png", "ContentType": "image/png", "Data": "bWFkZSB5b3UgbG9vayE="}]}' \ + --form 'inlinefile=@hi.png;headers="Content-ID: "' \ + --form attachedfile=@mox.png \ + http://localhost:1080/webapi/v0/Send + { + "MessageID": "", + "Submissions": [ + { + "Address": "mox@localhost", + "QueueMsgID": 10011, + "FromID": "yWiUQ6mvJND8FRPSmc9y5A" + } + ] + } + +Get a message in parsed form: + + $ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 424}' http://localhost:1080/webapi/v0/MessageGet + { + "Message": { + "From": [ + { + "Name": "mox", + "Address": "mox@localhost" + } + ], + "To": [ + { + "Name": "", + "Address": "mox@localhost" + } + ], + "CC": [], + "BCC": [], + "ReplyTo": [], + "MessageID": "<84vCeme_yZXyDzjWDeYBpg@localhost>", + "References": [], + "Date": "2024-04-04T14:29:42+02:00", + "Subject": "hello", + "Text": "hi \u263a\n", + "HTML": "" + }, + "Structure": { + "ContentType": "multipart/mixed", + "ContentTypeParams": { + 
"boundary": "0ee72dc30dbab2ca6f7a363844a10a9f6111fc6dd31b8ff0b261478c2c48" + }, + "ContentID": "", + "DecodedSize": 0, + "Parts": [ + { + "ContentType": "multipart/related", + "ContentTypeParams": { + "boundary": "b5ed0977ee2b628040f394c3f374012458379a4f3fcda5036371d761c81d" + }, + "ContentID": "", + "DecodedSize": 0, + "Parts": [ + { + "ContentType": "multipart/alternative", + "ContentTypeParams": { + "boundary": "3759771adede7bd191ef37f2aa0e49ff67369f4000c320f198a875e96487" + }, + "ContentID": "", + "DecodedSize": 0, + "Parts": [ + { + "ContentType": "text/plain", + "ContentTypeParams": { + "charset": "utf-8" + }, + "ContentID": "", + "DecodedSize": 8, + "Parts": [] + }, + { + "ContentType": "text/html", + "ContentTypeParams": { + "charset": "us-ascii" + }, + "ContentID": "", + "DecodedSize": 22, + "Parts": [] + } + ] + }, + { + "ContentType": "image/png", + "ContentTypeParams": {}, + "ContentID": "", + "DecodedSize": 19375, + "Parts": [] + } + ] + }, + { + "ContentType": "image/png", + "ContentTypeParams": {}, + "ContentID": "", + "DecodedSize": 14, + "Parts": [] + }, + { + "ContentType": "image/png", + "ContentTypeParams": {}, + "ContentID": "", + "DecodedSize": 7766, + "Parts": [] + } + ] + }, + "Meta": { + "Size": 38946, + "DSN": false, + "Flags": [ + "$notjunk", + "\seen" + ], + "MailFrom": "", + "MailFromValidated": false, + "MsgFrom": "", + "MsgFromValidated": false, + "DKIMVerifiedDomains": [], + "RemoteIP": "", + "MailboxName": "Inbox" + } + } + +Errors (with a 400 bad request HTTP status response) include a human-readable +message and a code for programmatic use: + + $ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 999}' http://localhost:1080/webapi/v0/MessageGet + { + "Code": "notFound", + "Message": "message not found" + } + +Get a raw, unparsed message, as bytes: + + $ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 123}' http://localhost:1080/webapi/v0/MessageRawGet + [message as bytes in raw form] + +Mark a message as 
    read: + + $ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 424, "Flags": ["\\Seen", "custom"]}' http://localhost:1080/webapi/v0/MessageFlagsAdd + {} + +# Webhook examples + +A webhook is delivered by an HTTP POST, with headers X-Mox-Webhook-ID and +X-Mox-Webhook-Attempt and a JSON body with the data. To simulate a webhook call +for incoming messages, use: + + curl -H 'X-Mox-Webhook-ID: 123' -H 'X-Mox-Webhook-Attempt: 1' --json '{...}' http://localhost/yourapp + +Example webhook HTTP POST JSON body for successful outgoing delivery: + + { + "Version": 0, + "Event": "delivered", + "DSN": false, + "Suppressing": false, + "QueueMsgID": 101, + "FromID": "MDEyMzQ1Njc4OWFiY2RlZg", + "MessageID": "", + "Subject": "subject of original message", + "WebhookQueued": "2024-03-27T00:00:00Z", + "SMTPCode": 250, + "SMTPEnhancedCode": "", + "Error": "", + "Extra": {} + } + +Example webhook HTTP POST JSON body for failed delivery based on incoming DSN +message, with custom extra data fields (from original submission), and adding the address to the suppression list: + + { + "Version": 0, + "Event": "failed", + "DSN": true, + "Suppressing": true, + "QueueMsgID": 102, + "FromID": "MDEyMzQ1Njc4OWFiY2RlZg", + "MessageID": "", + "Subject": "subject of original message", + "WebhookQueued": "2024-03-27T00:00:00Z", + "SMTPCode": 554, + "SMTPEnhancedCode": "5.4.0", + "Error": "timeout connecting to host", + "Extra": { + "userid": "456" + } + } + +Example JSON body for webhooks for incoming delivery of a basic message: + + { + "Version": 0, + "From": [ + { + "Name": "", + "Address": "mox@localhost" + } + ], + "To": [ + { + "Name": "", + "Address": "mjl@localhost" + } + ], + "CC": [], + "BCC": [], + "ReplyTo": [], + "Subject": "hi", + "MessageID": "", + "InReplyTo": "", + "References": [], + "Date": "2024-03-27T00:00:00Z", + "Text": "hello world ☺\n", + "HTML": "", + "Structure": { + "ContentType": "text/plain", + "ContentTypeParams": { + "charset": "utf-8" + }, + "ContentID": "", + 
    
    "DecodedSize": 17, + "Parts": [] + }, + "Meta": { + "MsgID": 201, + "MailFrom": "mox@localhost", + "MailFromValidated": false, + "MsgFromValidated": true, + "RcptTo": "mjl@localhost", + "DKIMVerifiedDomains": [ + "localhost" + ], + "RemoteIP": "127.0.0.1", + "Received": "2024-03-27T00:00:03Z", + "MailboxName": "Inbox", + "Automated": false + } + } +*/ +package webapi + +// NOTE: DO NOT EDIT, this file is generated by gendoc.sh. diff --git a/webapi/gendoc.sh b/webapi/gendoc.sh new file mode 100755 index 0000000..85b1b20 --- /dev/null +++ b/webapi/gendoc.sh @@ -0,0 +1,297 @@ +#!/bin/bash +set -euo pipefail + +# this is run with .. as working directory. + +# note: outgoing hook events are in ../queue/hooks.go, ../mox-/config.go, ../queue.go and ../webapi/gendoc.sh. keep in sync. + +# todo: find some proper way to generate the curl commands and responses automatically... + +cat < calls a method. The form can be either +"application/x-www-form-urlencoded" or "multipart/form-data". Form field +"request" must contain the request parameters, encoded as JSON. + +HTTP basic authentication is required for calling methods, with an email +address as user name. Use a login address configured for "unique SMTP MAIL +FROM" addresses, and configure a period to "keep retired messages delivered +from the queue" for automatic suppression list management. + +HTTP response status 200 OK indicates a successful method call, status 400 +indicates an error. The response body of an error is a JSON object with a +human-readable "Message" field, and a "Code" field for programmatic handling +(common codes: "user" for user-induced errors, "server" for server-caused +errors). Most successful calls return a JSON object, but some return data +(e.g. a raw message or an attachment of a message). See [Methods] for the +methods, and [Client] for their documentation. The first element of their +return values indicates their JSON object type or io.ReadCloser for non-JSON +data. 
    
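    The calling convention above (HTTP basic auth with the email address as user name, the JSON parameters in a "request" form field, 200 for success, 400 with a Code/Message JSON object for errors) can be driven from any HTTP client. A sketch in Go — the call helper and apiError type here are illustrative, not part of mox, and the httptest server stands in for a running instance:

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    	"net/http/httptest"
    	"net/url"
    	"strings"
    )

    // apiError mirrors the documented 400 response body: "Code" for
    // programmatic handling, "Message" for humans.
    type apiError struct {
    	Code    string
    	Message string
    }

    func (e apiError) Error() string { return e.Code + ": " + e.Message }

    // call POSTs the JSON parameters as the "request" form field to
    // baseURL+method, authenticating with HTTP basic auth. A 400 response is
    // decoded into an apiError; the raw body is returned on 200.
    func call(baseURL, username, password, method, requestJSON string) ([]byte, error) {
    	form := url.Values{"request": {requestJSON}}
    	req, err := http.NewRequest("POST", baseURL+method, strings.NewReader(form.Encode()))
    	if err != nil {
    		return nil, err
    	}
    	req.SetBasicAuth(username, password)
    	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return nil, err
    	}
    	if resp.StatusCode == http.StatusBadRequest {
    		var apiErr apiError
    		if err := json.Unmarshal(body, &apiErr); err != nil {
    			return nil, err
    		}
    		return nil, apiErr
    	}
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("unexpected status %s", resp.Status)
    	}
    	return body, nil
    }

    func main() {
    	// Fake server answering like the documented "message not found" error.
    	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		w.Header().Set("Content-Type", "application/json; charset=utf-8")
    		w.WriteHeader(http.StatusBadRequest)
    		fmt.Fprint(w, `{"Code": "notFound", "Message": "message not found"}`)
    	}))
    	defer srv.Close()

    	_, err := call(srv.URL+"/webapi/v0/", "mox@localhost", "moxmoxmox", "MessageGet", `{"MsgID": 999}`)
    	fmt.Println(err) // prints: notFound: message not found
    }
    ```

    Against a real instance the base URL would be e.g. http://localhost:1080/webapi/v0/ for "mox localserve"; only the transport shape is shown here.
    
    
    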
    The request and response types are converted from/to JSON. Optional and +missing/empty fields/values are converted into Go zero values: zero for +numbers, empty strings, empty lists and empty objects. New fields may be added +in response objects in future versions; parsers should ignore unrecognized +fields. + +An HTTP GET to a method URL serves an HTML page showing example +request/response JSON objects in a form and a button to call the method. + +# Webhooks + +Webhooks for outgoing delivery events and incoming deliveries are configured +per account. + +A webhook is delivered by an HTTP POST with headers "X-Mox-Webhook-ID" (unique +ID of webhook) and "X-Mox-Webhook-Attempt" (number of delivery attempts, +starting at 1), and a JSON body with the webhook data. Webhook delivery +failures are retried at a schedule similar to message deliveries, until +permanent failure. + +See [webhook.Outgoing] for the fields in a webhook for outgoing deliveries, and +in particular [webhook.OutgoingEvent] for the types of events. + +Only the latest event for the delivery of a particular outgoing message will be +delivered; any webhooks for that message still in the queue (after failure to +deliver) are retired as superseded when a new event occurs. + +Webhooks for incoming deliveries are configured separately from outgoing +deliveries. Incoming DSNs for previously sent messages do not cause a webhook +to the webhook URL for incoming messages, only to the webhook URL for outgoing +delivery events. The incoming webhook JSON payload contains the message +envelope (parsed To, Cc, Bcc, Subject and more headers), the MIME structure, +and the contents of the first text and HTML parts. See [webhook.Incoming] for +the fields in the JSON object. The full message and individual parts, including +attachments, can be retrieved using the webapi. 
    
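    On the receiving side, a webhook body is plain JSON, so a handler only needs a struct with the fields it cares about. The sketch below decodes a subset of the outgoing-event fields shown in the examples; the stopSending policy (treating a "failed" event or Suppressing=true as final) is an illustration of how an application might react, not mox behaviour:

    ```go
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // OutgoingEvent decodes a subset of the outgoing-delivery webhook JSON.
    // Unknown fields in the payload are ignored by encoding/json.
    type OutgoingEvent struct {
    	Version     int
    	Event       string // E.g. "delivered", "failed".
    	DSN         bool
    	Suppressing bool // Whether mox added the address to its suppression list.
    	QueueMsgID  int64
    	FromID      string
    	SMTPCode    int
    	Error       string
    	Extra       map[string]string
    }

    // stopSending is an example application policy: stop emailing a recipient
    // after a permanent failure, or when mox is suppressing the address anyway.
    func stopSending(ev OutgoingEvent) bool {
    	return ev.Event == "failed" || ev.Suppressing
    }

    func main() {
    	// In a real handler this payload would be the HTTP POST body, read
    	// after checking the X-Mox-Webhook-ID/X-Mox-Webhook-Attempt headers.
    	payload := `{"Version":0,"Event":"failed","DSN":true,"Suppressing":true,"QueueMsgID":102,"FromID":"MDEyMzQ1Njc4OWFiY2RlZg","SMTPCode":554,"Error":"timeout connecting to host","Extra":{"userid":"456"}}`
    	var ev OutgoingEvent
    	if err := json.Unmarshal([]byte(payload), &ev); err != nil {
    		log.Fatalf("decoding webhook body: %v", err)
    	}
    	fmt.Printf("queue msg %d: event=%s stop=%v\n", ev.QueueMsgID, ev.Event, stopSending(ev))
    	// prints: queue msg 102: event=failed stop=true
    }
    ```
    
    
    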
    + +# Transactional email + +When sending transactional emails, potentially to many recipients, it is +important to process delivery failure notifications. If messages are rejected, +or email addresses no longer exist, you should stop sending email to those +addresses. If you try to keep sending, the receiving mail servers may consider +that spammy behaviour and blocklist your mail server. + +Automatic suppression list management already prevents most repeated sending +attempts. The webhooks make it easy to receive failure notifications. + +To keep spam complaints about your messages to a minimum, include links to +unsubscribe from future messages without requiring further actions from the +user, such as logins. Include an unsubscribe link in the footer, and include +List-* message headers, such as List-Id, List-Unsubscribe and +List-Unsubscribe-Post. + +# Webapi examples + +Below are examples for making webapi calls to a locally running "mox +localserve" with its default credentials. + +Send a basic message: + + \$ curl --user mox@localhost:moxmoxmox \\ + --data request='{"To": [{"Address": "mox@localhost"}], "Text": "hi ☺"}' \\ + http://localhost:1080/webapi/v0/Send + { + "MessageID": "", + "Submissions": [ + { + "Address": "mox@localhost", + "QueueMsgID": 10010, + "FromID": "ZfV16EATHwKEufrSMo055Q" + } + ] + } + + +Send a message with files both from form upload and base64 included in JSON: + + \$ curl --user mox@localhost:moxmoxmox \\ + --form request='{"To": [{"Address": "mox@localhost"}], "Subject": "hello", "Text": "hi ☺", "HTML": "", "AttachedFiles": [{"Name": "img.png", "ContentType": "image/png", "Data": "bWFkZSB5b3UgbG9vayE="}]}' \\ + --form 'inlinefile=@hi.png;headers="Content-ID: "' \\ + --form attachedfile=@mox.png \\ + http://localhost:1080/webapi/v0/Send + { + "MessageID": "", + "Submissions": [ + { + "Address": "mox@localhost", + "QueueMsgID": 10011, + "FromID": "yWiUQ6mvJND8FRPSmc9y5A" + } + ] + } + +Get a message in parsed form: + + \$ curl --user 
    
mox@localhost:moxmoxmox --data request='{"MsgID": 424}' http://localhost:1080/webapi/v0/MessageGet + { + "Message": { + "From": [ + { + "Name": "mox", + "Address": "mox@localhost" + } + ], + "To": [ + { + "Name": "", + "Address": "mox@localhost" + } + ], + "CC": [], + "BCC": [], + "ReplyTo": [], + "MessageID": "<84vCeme_yZXyDzjWDeYBpg@localhost>", + "References": [], + "Date": "2024-04-04T14:29:42+02:00", + "Subject": "hello", + "Text": "hi \u263a\n", + "HTML": "" + }, + "Structure": { + "ContentType": "multipart/mixed", + "ContentTypeParams": { + "boundary": "0ee72dc30dbab2ca6f7a363844a10a9f6111fc6dd31b8ff0b261478c2c48" + }, + "ContentID": "", + "DecodedSize": 0, + "Parts": [ + { + "ContentType": "multipart/related", + "ContentTypeParams": { + "boundary": "b5ed0977ee2b628040f394c3f374012458379a4f3fcda5036371d761c81d" + }, + "ContentID": "", + "DecodedSize": 0, + "Parts": [ + { + "ContentType": "multipart/alternative", + "ContentTypeParams": { + "boundary": "3759771adede7bd191ef37f2aa0e49ff67369f4000c320f198a875e96487" + }, + "ContentID": "", + "DecodedSize": 0, + "Parts": [ + { + "ContentType": "text/plain", + "ContentTypeParams": { + "charset": "utf-8" + }, + "ContentID": "", + "DecodedSize": 8, + "Parts": [] + }, + { + "ContentType": "text/html", + "ContentTypeParams": { + "charset": "us-ascii" + }, + "ContentID": "", + "DecodedSize": 22, + "Parts": [] + } + ] + }, + { + "ContentType": "image/png", + "ContentTypeParams": {}, + "ContentID": "", + "DecodedSize": 19375, + "Parts": [] + } + ] + }, + { + "ContentType": "image/png", + "ContentTypeParams": {}, + "ContentID": "", + "DecodedSize": 14, + "Parts": [] + }, + { + "ContentType": "image/png", + "ContentTypeParams": {}, + "ContentID": "", + "DecodedSize": 7766, + "Parts": [] + } + ] + }, + "Meta": { + "Size": 38946, + "DSN": false, + "Flags": [ + "\$notjunk", + "\\seen" + ], + "MailFrom": "", + "MailFromValidated": false, + "MsgFrom": "", + "MsgFromValidated": false, + "DKIMVerifiedDomains": [], + "RemoteIP": 
    "", + "MailboxName": "Inbox" + } + } + +Errors (with a 400 bad request HTTP status response) include a human-readable +message and a code for programmatic use: + + \$ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 999}' http://localhost:1080/webapi/v0/MessageGet + { + "Code": "notFound", + "Message": "message not found" + } + +Get a raw, unparsed message, as bytes: + + \$ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 123}' http://localhost:1080/webapi/v0/MessageRawGet + [message as bytes in raw form] + +Mark a message as read: + + \$ curl --user mox@localhost:moxmoxmox --data request='{"MsgID": 424, "Flags": ["\\\\Seen", "custom"]}' http://localhost:1080/webapi/v0/MessageFlagsAdd + {} + +# Webhook examples + +A webhook is delivered by an HTTP POST, with headers X-Mox-Webhook-ID and +X-Mox-Webhook-Attempt and a JSON body with the data. To simulate a webhook call +for incoming messages, use: + + curl -H 'X-Mox-Webhook-ID: 123' -H 'X-Mox-Webhook-Attempt: 1' --json '{...}' http://localhost/yourapp + +EOF + +for ex in $(./mox example | grep webhook); do + ./mox example $ex + echo +done + +cat < 0 { + r.Limit -= int64(n) + if r.Limit < 0 { + return 0, errLimit + } + } + return n, err +} diff --git a/webapi/webapi.go b/webapi/webapi.go new file mode 100644 index 0000000..16e1b82 --- /dev/null +++ b/webapi/webapi.go @@ -0,0 +1,260 @@ +package webapi + +import ( + "context" + "io" + "time" + + "github.com/mjl-/mox/webhook" +) + +// todo future: we can have text and html templates, let submitters reference them along with parameters, and compose the message bodies ourselves. + // todo future: generate api specs (e.g. openapi) for webapi + // todo future: consider deprecating some of the webapi in favor of jmap + +// Methods of the webapi. More methods may be added in the future. See [Client] +// for documentation. 
    
+type Methods interface { + Send(ctx context.Context, request SendRequest) (response SendResult, err error) + SuppressionList(ctx context.Context, request SuppressionListRequest) (response SuppressionListResult, err error) + SuppressionAdd(ctx context.Context, request SuppressionAddRequest) (response SuppressionAddResult, err error) + SuppressionRemove(ctx context.Context, request SuppressionRemoveRequest) (response SuppressionRemoveResult, err error) + SuppressionPresent(ctx context.Context, request SuppressionPresentRequest) (response SuppressionPresentResult, err error) + MessageGet(ctx context.Context, request MessageGetRequest) (response MessageGetResult, err error) + MessageRawGet(ctx context.Context, request MessageRawGetRequest) (response io.ReadCloser, err error) + MessagePartGet(ctx context.Context, request MessagePartGetRequest) (response io.ReadCloser, err error) + MessageDelete(ctx context.Context, request MessageDeleteRequest) (response MessageDeleteResult, err error) + MessageFlagsAdd(ctx context.Context, request MessageFlagsAddRequest) (response MessageFlagsAddResult, err error) + MessageFlagsRemove(ctx context.Context, request MessageFlagsRemoveRequest) (response MessageFlagsRemoveResult, err error) + MessageMove(ctx context.Context, request MessageMoveRequest) (response MessageMoveResult, err error) +} + +// Error indicates an API-related error. +type Error struct { + // For programmatic handling. Common values: "user" for generic error by user, + // "server" for a server-side processing error, "badAddress" for malformed email + // addresses. + Code string + + // Human readable error message. + Message string +} + +// Error returns the human-readable error message. +func (e Error) Error() string { + return e.Message +} + +type NameAddress struct { + Name string // Optional, human-readable "display name" of the addressee. + Address string // Required, email address. 
+} + +// Message is an email message, used both for outgoing submitted messages and +// incoming messages. +type Message struct { + // For sending, if empty, automatically filled based on authenticated user and + // account information. Outgoing messages are allowed maximum 1 From address, + // incoming messages can in theory have zero or multiple, but typically have just + // one. + From []NameAddress + + // To/Cc/Bcc message headers. Outgoing messages are sent to all these addresses. + // All are optional, but there should be at least one addressee. + To []NameAddress + CC []NameAddress + // For submissions, BCC addressees receive the message but are not added to the + // message headers. For incoming messages, this is typically empty. + BCC []NameAddress + + // Optional Reply-To header, where the recipient is asked to send replies to. + ReplyTo []NameAddress + + // Message-ID from message header, should be wrapped in <>'s. For outgoing + // messages, a unique message-id is generated if empty. + MessageID string + + // Optional. References to message-id's (including <>) of other messages, if this + // is a reply or forwarded message. References are from oldest (ancestor) to most + // recent message. For outgoing messages, if non-empty then In-Reply-To is set to + // the last element. + References []string + + // Optional, set to time of submission for outgoing messages if nil. + Date *time.Time + + // Subject header, optional. + Subject string + + // For outgoing messages, at least text or HTML must be non-empty. If both are + // present, a multipart/alternative part is created. Lines must be + // \n-separated, automatically replaced with \r\n when composing the message. + // For parsed, incoming messages, values are truncated to 1MB (1024*1024 bytes). + // Use MessagePartGet to retrieve the full part data. + Text string + HTML string +} + +// SendRequest submits a message to be delivered. 
    +type SendRequest struct { + // Message with headers and contents to compose. Additional headers and files can + // be added too (see below, and the use of multipart/form-data requests). The + // fields of Message are included directly in SendRequest. Required. + Message + + // Metadata to associate with the delivery, through the queue, including webhooks + // about delivery events. Metadata can also be set with regular SMTP submission + // through message headers "X-Mox-Extra-<key>: <value>". Current behaviour is as + // follows, but this may change: 1. Keys are canonicalized, each dash-separated + // word changed to start with a capital. 2. Keys cannot be duplicated. 3. These + // headers are not removed when delivering. + Extra map[string]string + + // Additional custom headers to include in outgoing message. Optional. + // Unless a User-Agent or X-Mailer header is present, a User-Agent is added. + Headers [][2]string + + // Inline files are added to the message and should be displayed by mail clients as + // part of the message contents. Inline files cause a part with content-type + // "multipart/related" to be added to the message. Optional. + InlineFiles []File + + // Attached files are added to the message and should be shown as files that can be + // saved. Attached files cause a part with content-type "multipart/mixed" to be + // added to the message. Optional. + AttachedFiles []File + + // If absent/null, regular TLS requirements apply (opportunistic TLS, DANE, + // MTA-STS). If true, the SMTP REQUIRETLS extension is required, enforcing verified + // TLS along the delivery path. If false, TLS requirements are relaxed and + // DANE/MTA-STS policies may be ignored to increase the odds of successful but + // insecure delivery. Optional. + RequireTLS *bool + + // If set, it should be a time in the future at which the first delivery attempt + // starts. Optional. 
    
    + FutureRelease *time.Time + + // Whether to store outgoing message in designated Sent mailbox (if configured). + SaveSent bool +} + +type File struct { + Name string // Optional. + ContentType string // E.g. application/pdf or image/png, automatically detected if empty. + ContentID string // E.g. "<randomid>", for use in html email with "cid:<randomid>". Optional. + Data string // Base64-encoded contents of the file. Required. +} + +// MessageMeta is returned as part of MessageGet. +type MessageMeta struct { + Size int64 // Total size of raw message file. + DSN bool // Whether this message is a DSN. + Flags []string // Standard message flags like \seen, \answered, $forwarded, $junk, $nonjunk, and custom keywords. + MailFrom string // Address used during SMTP "MAIL FROM" command. + MailFromValidated bool // Whether SMTP MAIL FROM address was SPF-validated. + MsgFrom string // Address used in message "From" header. + MsgFromValidated bool // Whether address in message "From"-header was DMARC(-like) validated. + DKIMVerifiedDomains []string // Verified domains from DKIM-signature in message. Can be different domain than used in addresses. + RemoteIP string // Where the message was delivered from. + MailboxName string +} + +type SendResult struct { + MessageID string // "<random>@<hostname>", as added by submitter or automatically generated during submission. + Submissions []Submission // Messages submitted to queue for delivery. In order of To, CC, BCC fields in request. +} + +type Submission struct { + Address string // From original recipient (to/cc/bcc). + QueueMsgID int64 // Of message added to delivery queue, later webhook calls reference this same ID. + FromID string // Unique ID used during delivery, later webhook calls reference this same FromID. +} + +// Suppression is an address to which messages will not be delivered. Attempts to +// deliver or queue will result in an immediate permanent failure to deliver. 
    
+type Suppression struct { + ID int64 + Created time.Time `bstore:"default now"` + + // Suppression applies to this account only. + Account string `bstore:"nonzero,unique Account+BaseAddress"` + + // Unicode. Address with fictional simplified localpart: lowercase, dots removed + // (gmail), first token before any "-" or "+" (typical catchall separator). + BaseAddress string `bstore:"nonzero"` + + // Unicode. Address that caused this suppression. + OriginalAddress string `bstore:"nonzero"` + + Manual bool + Reason string +} + +type SuppressionListRequest struct{} +type SuppressionListResult struct { + Suppressions []Suppression // Current suppressed addresses for account. +} + +type SuppressionAddRequest struct { + EmailAddress string + Manual bool // Whether added manually or automatically. + Reason string // Free-form text. +} +type SuppressionAddResult struct{} + +type SuppressionRemoveRequest struct { + EmailAddress string +} +type SuppressionRemoveResult struct{} + +type SuppressionPresentRequest struct { + EmailAddress string +} +type SuppressionPresentResult struct { + Present bool +} + +type MessageGetRequest struct { + MsgID int64 +} +type MessageGetResult struct { + Message Message + Structure webhook.Structure // MIME structure. + Meta MessageMeta // Additional information about message and SMTP delivery. +} + +type MessageRawGetRequest struct { + MsgID int64 +} + +type MessagePartGetRequest struct { + MsgID int64 + + // Indexes into MIME parts, e.g. [0, 2] first dereferences the first element in a + // multipart message, then the 3rd part within that first element. + PartPath []int +} + +type MessageDeleteRequest struct { + MsgID int64 +} +type MessageDeleteResult struct{} + +type MessageFlagsAddRequest struct { + MsgID int64 + Flags []string // Standard message flags like \seen, \answered, $forwarded, $junk, $nonjunk, and custom keywords. 
+} +type MessageFlagsAddResult struct{} + +type MessageFlagsRemoveRequest struct { + MsgID int64 + Flags []string +} +type MessageFlagsRemoveResult struct{} + +type MessageMoveRequest struct { + MsgID int64 + DestMailboxName string // E.g. "Inbox", must already exist. +} +type MessageMoveResult struct{} diff --git a/webapisrv/server.go b/webapisrv/server.go new file mode 100644 index 0000000..6f74c63 --- /dev/null +++ b/webapisrv/server.go @@ -0,0 +1,1330 @@ +// Package webapisrv implements the server-side of the webapi. +package webapisrv + +// In a separate package from webapi, so webapi.Client can be used and imported +// without including all mox internals. Documentation for the functions is in +// ../webapi/client.go. + +import ( + "bytes" + "context" + cryptorand "crypto/rand" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + htmltemplate "html/template" + "io" + "log/slog" + "mime" + "mime/multipart" + "net/http" + "net/textproto" + "reflect" + "runtime/debug" + "slices" + "strings" + "time" + + "github.com/prometheus/client_golang/prometheus" + "github.com/prometheus/client_golang/prometheus/promauto" + + "github.com/mjl-/bstore" + + "github.com/mjl-/mox/dkim" + "github.com/mjl-/mox/dns" + "github.com/mjl-/mox/message" + "github.com/mjl-/mox/metrics" + "github.com/mjl-/mox/mlog" + "github.com/mjl-/mox/mox-" + "github.com/mjl-/mox/moxio" + "github.com/mjl-/mox/moxvar" + "github.com/mjl-/mox/queue" + "github.com/mjl-/mox/smtp" + "github.com/mjl-/mox/store" + "github.com/mjl-/mox/webapi" + "github.com/mjl-/mox/webauth" + "github.com/mjl-/mox/webhook" + "github.com/mjl-/mox/webops" +) + +var pkglog = mlog.New("webapi", nil) + +var ( + // Similar between ../webmail/webmail.go:/metricSubmission and ../smtpserver/server.go:/metricSubmission and ../webapisrv/server.go:/metricSubmission + metricSubmission = promauto.NewCounterVec( + prometheus.CounterOpts{ + Name: "mox_webapi_submission_total", + Help: "Webapi message submission results, known values (those 
ending with error are server errors): ok, badfrom, messagelimiterror, recipientlimiterror, queueerror, storesenterror.", + }, + []string{ + "result", + }, + ) + metricServerErrors = promauto.NewCounterVec( + prometheus.CounterOpts{ + Name: "mox_webapi_errors_total", + Help: "Webapi server errors, known values: dkimsign, submit.", + }, + []string{ + "error", + }, + ) + metricResults = promauto.NewCounterVec( + prometheus.CounterOpts{ + Name: "mox_webapi_results_total", + Help: "HTTP webapi results by method and result.", + }, + []string{"method", "result"}, // result: "badauth", "ok", or error code + ) + metricDuration = promauto.NewHistogramVec( + prometheus.HistogramOpts{ + Name: "mox_webapi_duration_seconds", + Help: "HTTP webhook call duration.", + Buckets: []float64{0.01, 0.05, 0.1, 0.5, 1, 5, 10, 20, 30}, + }, + []string{"method"}, + ) +) + +// We pass the request to the handler so the TLS info can be used for +// the Received header in submitted messages. Most API calls need just the +// account name. +type ctxKey string + +var requestInfoCtxKey ctxKey = "requestInfo" + +type requestInfo struct { + Log mlog.Log + LoginAddress string + Account *store.Account + Response http.ResponseWriter // For setting headers for non-JSON responses. + Request *http.Request // For Proto and TLS connection state during message submit. +} + +// todo: show a curl invocation on the method pages + +var docsMethodTemplate = htmltemplate.Must(htmltemplate.New("method").Parse(` + + + + Method {{ .Method }} - WebAPI - Mox + + + +

WebAPI - Method {{ .Method }}

+
+
+

Request JSON

+
+
+
+ + +
+
+{{ if .ReturnsBytes }} +

Method has a non-JSON response.

+{{ else }} +

Response JSON

+
+{{ end }} +
+
+ + + +`)) + +var docsIndex []byte + +func init() { + var methods []string + mt := reflect.TypeOf((*webapi.Methods)(nil)).Elem() + n := mt.NumMethod() + for i := 0; i < n; i++ { + methods = append(methods, mt.Method(i).Name) + } + docsIndexTmpl := htmltemplate.Must(htmltemplate.New("index").Parse(` + + + + + Webapi - Mox + + + +

Webapi and webhooks

+

The mox webapi is a simple HTTP/JSON-based API for sending messages and processing incoming messages.

+

    Configure webhooks in mox to receive notifications about outgoing delivery events, and/or incoming deliveries of messages.
    
    

+

Documentation and examples:

+

{{ .WebapiDocsURL }}

+

Methods

+

The methods below are available in this version of mox. Follow a link for an example request/response JSON, and a button to make an API call.

+
    +{{ range $i, $method := .Methods }} +
  • {{ $method }}
  • +{{ end }} +
+ + +`)) + webapiDocsURL := "https://pkg.go.dev/github.com/mjl-/mox@" + moxvar.VersionBare + "/webapi/" + webhookDocsURL := "https://pkg.go.dev/github.com/mjl-/mox@" + moxvar.VersionBare + "/webhook/" + indexArgs := struct { + WebapiDocsURL string + WebhookDocsURL string + Methods []string + }{webapiDocsURL, webhookDocsURL, methods} + var b bytes.Buffer + err := docsIndexTmpl.Execute(&b, indexArgs) + if err != nil { + panic("executing api docs index template: " + err.Error()) + } + docsIndex = b.Bytes() +} + +// NewServer returns a new http.Handler for a webapi server. +func NewServer(maxMsgSize int64, path string, isForwarded bool) http.Handler { + return server{maxMsgSize, path, isForwarded} +} + +// server implements the webapi methods. +type server struct { + maxMsgSize int64 // Of outgoing messages. + path string // Path webapi is configured under, typically /webapi/, with methods at /webapi/v0/. + isForwarded bool // Whether incoming requests are reverse-proxied. Used for getting remote IPs for rate limiting. +} + +var _ webapi.Methods = server{} + +// ServeHTTP implements http.Handler. +func (s server) ServeHTTP(w http.ResponseWriter, r *http.Request) { + log := pkglog.WithContext(r.Context()) // Take cid from webserver. + + // Send requests to /webapi/ to /webapi/v0/. + if r.URL.Path == "/" { + if r.Method != "GET" { + http.Error(w, "405 - method not allow", http.StatusMethodNotAllowed) + return + } + http.Redirect(w, r, s.path+"v0/", http.StatusSeeOther) + return + } + // Serve short introduction and list to methods at /webapi/v0/. + if r.URL.Path == "/v0/" { + w.Header().Set("Content-Type", "text/html; charset=utf-8") + w.Write(docsIndex) + return + } + + // Anything else must be a method endpoint. 
+ if !strings.HasPrefix(r.URL.Path, "/v0/") { + http.NotFound(w, r) + return + } + fn := r.URL.Path[len("/v0/"):] + log = log.With(slog.String("method", fn)) + rfn := reflect.ValueOf(s).MethodByName(fn) + var zero reflect.Value + if rfn == zero || rfn.Type().NumIn() != 2 || rfn.Type().NumOut() != 2 { + log.Debug("unknown webapi method") + http.NotFound(w, r) + return + } + + // GET on method returns an example request JSON, a button to call the method, + // which either fills a textarea with the response (in case of JSON) or posts to + // the URL letting the browser handle the response (e.g. raw message or part). + if r.Method == "GET" { + formatJSON := func(v any) (string, error) { + var b bytes.Buffer + enc := json.NewEncoder(&b) + enc.SetIndent("", "\t") + enc.SetEscapeHTML(false) + err := enc.Encode(v) + return string(b.String()), err + } + + req, err := formatJSON(mox.FillExample(nil, reflect.New(rfn.Type().In(1))).Interface()) + if err != nil { + log.Errorx("formatting request as json", err) + http.Error(w, "500 - internal server error - marshal request: "+err.Error(), http.StatusInternalServerError) + return + } + // todo: could check for io.ReadCloser, but we don't return other interfaces than that one. 
+ returnsBytes := rfn.Type().Out(0).Kind() == reflect.Interface + var resp string + if !returnsBytes { + resp, err = formatJSON(mox.FillExample(nil, reflect.New(rfn.Type().Out(0))).Interface()) + if err != nil { + log.Errorx("formatting response as json", err) + http.Error(w, "500 - internal server error - marshal response: "+err.Error(), http.StatusInternalServerError) + return + } + } + args := struct { + Method string + Request string + Response string + ReturnsBytes bool + }{fn, req, resp, returnsBytes} + w.Header().Set("Content-Type", "text/html; charset=utf-8") + err = docsMethodTemplate.Execute(w, args) + log.Check(err, "executing webapi method template") + return + } else if r.Method != "POST" { + http.Error(w, "405 - method not allowed - use get or post", http.StatusMethodNotAllowed) + return + } + + // Account is available during call, but we close it before we start writing a + // response, to prevent slow readers from holding a reference for a long time. + var acc *store.Account + closeAccount := func() { + if acc != nil { + err := acc.Close() + log.Check(err, "closing account") + acc = nil + } + } + defer closeAccount() + + email, password, aok := r.BasicAuth() + if !aok { + metricResults.WithLabelValues(fn, "badauth").Inc() + log.Debug("missing http basic authentication credentials") + w.Header().Set("WWW-Authenticate", "Basic realm=webapi") + http.Error(w, "401 - unauthorized - use http basic auth with email address as username", http.StatusUnauthorized) + return + } + log = log.With(slog.String("username", email)) + + t0 := time.Now() + + // If remote IP/network resulted in too many authentication failures, refuse to serve. 
+ remoteIP := webauth.RemoteIP(log, s.isForwarded, r) + if remoteIP == nil { + metricResults.WithLabelValues(fn, "internal").Inc() + log.Debug("cannot find remote ip for rate limiter") + http.Error(w, "500 - internal server error - cannot find remote ip", http.StatusInternalServerError) + return + } + if !mox.LimiterFailedAuth.CanAdd(remoteIP, t0, 1) { + metrics.AuthenticationRatelimitedInc("webapi") + log.Debug("refusing connection due to many auth failures", slog.Any("remoteip", remoteIP)) + http.Error(w, "429 - too many auth attempts", http.StatusTooManyRequests) + return + } + + writeError := func(err webapi.Error) { + closeAccount() + metricResults.WithLabelValues(fn, err.Code).Inc() + + if err.Code == "server" { + log.Errorx("webapi call result", err, slog.String("resultcode", err.Code)) + } else { + log.Infox("webapi call result", err, slog.String("resultcode", err.Code)) + } + + w.Header().Set("Content-Type", "application/json; charset=utf-8") + w.WriteHeader(http.StatusBadRequest) + enc := json.NewEncoder(w) + enc.SetEscapeHTML(false) + werr := enc.Encode(err) + if werr != nil && !moxio.IsClosed(werr) { + log.Infox("writing error response", werr) + } + } + + // Called for all successful JSON responses, not non-JSON responses. 
+	writeResponse := func(resp any) {
+		closeAccount()
+		metricResults.WithLabelValues(fn, "ok").Inc()
+		log.Debug("webapi call result", slog.String("resultcode", "ok"))
+		w.Header().Set("Content-Type", "application/json; charset=utf-8")
+		enc := json.NewEncoder(w)
+		enc.SetEscapeHTML(false)
+		werr := enc.Encode(resp)
+		if werr != nil && !moxio.IsClosed(werr) {
+			log.Infox("writing response", werr)
+		}
+	}
+
+	authResult := "error"
+	defer func() {
+		metricDuration.WithLabelValues(fn).Observe(float64(time.Since(t0)) / float64(time.Second))
+		metrics.AuthenticationInc("webapi", "httpbasic", authResult)
+	}()
+
+	var err error
+	acc, err = store.OpenEmailAuth(log, email, password)
+	if err != nil {
+		mox.LimiterFailedAuth.Add(remoteIP, t0, 1)
+		if errors.Is(err, mox.ErrDomainNotFound) || errors.Is(err, mox.ErrAccountNotFound) || errors.Is(err, store.ErrUnknownCredentials) {
+			log.Debug("bad http basic authentication credentials")
+			metricResults.WithLabelValues(fn, "badauth").Inc()
+			authResult = "badcreds"
+			w.Header().Set("WWW-Authenticate", "Basic realm=webapi")
+			http.Error(w, "401 - unauthorized - use http basic auth with email address as username", http.StatusUnauthorized)
+			return
+		}
+		writeError(webapi.Error{Code: "server", Message: "error verifying credentials"})
+		return
+	}
+	authResult = "ok"
+	mox.LimiterFailedAuth.Reset(remoteIP, t0)
+
+	ct := r.Header.Get("Content-Type")
+	ct, _, err = mime.ParseMediaType(ct)
+	if err != nil {
+		writeError(webapi.Error{Code: "protocol", Message: "unknown content-type " + r.Header.Get("Content-Type")})
+		return
+	}
+	if ct == "multipart/form-data" {
+		err = r.ParseMultipartForm(200 * 1024)
+	} else {
+		err = r.ParseForm()
+	}
+	if err != nil {
+		writeError(webapi.Error{Code: "protocol", Message: "parsing form: " + err.Error()})
+		return
+	}
+
+	reqstr := r.PostFormValue("request")
+	if reqstr == "" {
+		writeError(webapi.Error{Code: "protocol", Message: "missing/empty request"})
+		return
+	}
+
+	defer func() {
+		x := recover()
+		if x == nil {
+			return
+		}
+		if err, eok := x.(webapi.Error); eok {
+			writeError(err)
+			return
+		}
+		log.Error("unhandled panic in webapi call", slog.Any("x", x), slog.String("resultcode", "server"))
+		metrics.PanicInc(metrics.Webapi)
+		debug.PrintStack()
+		writeError(webapi.Error{Code: "server", Message: "unhandled error"})
+	}()
+	req := reflect.New(rfn.Type().In(1))
+	dec := json.NewDecoder(strings.NewReader(reqstr))
+	dec.DisallowUnknownFields()
+	if err := dec.Decode(req.Interface()); err != nil {
+		writeError(webapi.Error{Code: "protocol", Message: fmt.Sprintf("parsing request: %s", err)})
+		return
+	}
+
+	reqInfo := requestInfo{log, email, acc, w, r}
+	nctx := context.WithValue(r.Context(), requestInfoCtxKey, reqInfo)
+	resp := rfn.Call([]reflect.Value{reflect.ValueOf(nctx), req.Elem()})
+	if !resp[1].IsZero() {
+		var e webapi.Error
+		err := resp[1].Interface().(error)
+		if x, eok := err.(webapi.Error); eok {
+			e = x
+		} else {
+			e = webapi.Error{Code: "error", Message: err.Error()}
+		}
+		writeError(e)
+		return
+	}
+	rc, ok := resp[0].Interface().(io.ReadCloser)
+	if !ok {
+		rv, _ := mox.FillNil(resp[0])
+		writeResponse(rv.Interface())
+		return
+	}
+	closeAccount()
+	log.Debug("webapi call result", slog.String("resultcode", "ok"))
+	metricResults.WithLabelValues(fn, "ok").Inc()
+	defer rc.Close()
+	if _, err := io.Copy(w, rc); err != nil && !moxio.IsClosed(err) {
+		log.Errorx("writing response to client", err)
+	}
+}
+
+func xcheckf(err error, format string, args ...any) {
+	if err != nil {
+		msg := fmt.Sprintf(format, args...)
+		panic(webapi.Error{Code: "server", Message: fmt.Sprintf("%s: %s", msg, err)})
+	}
+}
+
+func xcheckuserf(err error, format string, args ...any) {
+	if err != nil {
+		msg := fmt.Sprintf(format, args...)
+		panic(webapi.Error{Code: "user", Message: fmt.Sprintf("%s: %s", msg, err)})
+	}
+}
+
+func xdbwrite(ctx context.Context, acc *store.Account, fn func(tx *bstore.Tx)) {
+	err := acc.DB.Write(ctx, func(tx *bstore.Tx) error {
+		fn(tx)
+		return nil
+	})
+	xcheckf(err, "transaction")
+}
+
+func xdbread(ctx context.Context, acc *store.Account, fn func(tx *bstore.Tx)) {
+	err := acc.DB.Read(ctx, func(tx *bstore.Tx) error {
+		fn(tx)
+		return nil
+	})
+	xcheckf(err, "transaction")
+}
+
+func xcheckcontrol(s string) {
+	for _, c := range s {
+		if c < 0x20 {
+			xcheckuserf(errors.New("control characters not allowed"), "checking header values")
+		}
+	}
+}
+
+func xparseAddress(addr string) smtp.Address {
+	a, err := smtp.ParseAddress(addr)
+	if err != nil {
+		panic(webapi.Error{Code: "badAddress", Message: fmt.Sprintf("parsing address %q: %s", addr, err)})
+	}
+	return a
+}
+
+func xparseAddresses(l []webapi.NameAddress) ([]message.NameAddress, []smtp.Path) {
+	r := make([]message.NameAddress, len(l))
+	paths := make([]smtp.Path, len(l))
+	for i, a := range l {
+		xcheckcontrol(a.Name)
+		addr := xparseAddress(a.Address)
+		r[i] = message.NameAddress{DisplayName: a.Name, Address: addr}
+		paths[i] = addr.Path()
+	}
+	return r, paths
+}
+
+func xrandomID(n int) string {
+	return base64.RawURLEncoding.EncodeToString(xrandom(n))
+}
+
+func xrandom(n int) []byte {
+	buf := make([]byte, n)
+	x, err := cryptorand.Read(buf)
+	if err != nil {
+		panic("read random")
+	} else if x != n {
+		panic("short random read")
+	}
+	return buf
+}
+
+func (s server) Send(ctx context.Context, req webapi.SendRequest) (resp webapi.SendResult, err error) {
+	// Similar between ../smtpserver/server.go:/submit\( and ../webmail/api.go:/MessageSubmit\( and ../webapisrv/server.go:/Send\(
+
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	log := reqInfo.Log
+	acc := reqInfo.Account
+
+	m := req.Message
+
+	accConf, _ := acc.Conf()
+
+	if m.Text == "" && m.HTML == "" {
+		return resp, webapi.Error{Code: "missingBody", Message: "at least text or html body required"}
+	}
+
+	if len(m.From) == 0 {
+		m.From = []webapi.NameAddress{{Name: accConf.FullName, Address: reqInfo.LoginAddress}}
+	} else if len(m.From) > 1 {
+		return resp, webapi.Error{Code: "multipleFrom", Message: "multiple from-addresses not allowed"}
+	}
+	froms, fromPaths := xparseAddresses(m.From)
+	from, fromPath := froms[0], fromPaths[0]
+	to, toPaths := xparseAddresses(m.To)
+	cc, ccPaths := xparseAddresses(m.CC)
+	_, bccPaths := xparseAddresses(m.BCC)
+
+	recipients := append(append(toPaths, ccPaths...), bccPaths...)
+	addresses := append(append(m.To, m.CC...), m.BCC...)
+
+	// Check if from address is allowed for account.
+	fromAccName, _, _, err := mox.FindAccount(from.Address.Localpart, from.Address.Domain, false)
+	if err == nil && fromAccName != acc.Name {
+		err = mox.ErrAccountNotFound
+	}
+	if err != nil && (errors.Is(err, mox.ErrAccountNotFound) || errors.Is(err, mox.ErrDomainNotFound)) {
+		metricSubmission.WithLabelValues("badfrom").Inc()
+		return resp, webapi.Error{Code: "badFrom", Message: "from-address not configured for account"}
+	}
+	xcheckf(err, "checking if from address is allowed")
+
+	if len(recipients) == 0 {
+		return resp, webapi.Error{Code: "noRecipients", Message: "no recipients"}
+	}
+
+	// Check outgoing message rate limit.
+	xdbread(ctx, acc, func(tx *bstore.Tx) {
+		msglimit, rcptlimit, err := acc.SendLimitReached(tx, recipients)
+		if msglimit >= 0 {
+			metricSubmission.WithLabelValues("messagelimiterror").Inc()
+			panic(webapi.Error{Code: "messageLimitReached", Message: "outgoing message rate limit reached"})
+		} else if rcptlimit >= 0 {
+			metricSubmission.WithLabelValues("recipientlimiterror").Inc()
+			panic(webapi.Error{Code: "recipientLimitReached", Message: "outgoing new recipient rate limit reached"})
+		}
+		xcheckf(err, "checking send limit")
+	})
+
+	// If we have a non-ascii localpart, we will be sending with smtputf8. We'll go
+	// full utf-8 then.
+	intl := func(l []smtp.Path) bool {
+		for _, p := range l {
+			if p.Localpart.IsInternational() {
+				return true
+			}
+		}
+		return false
+	}
+	smtputf8 := intl([]smtp.Path{fromPath}) || intl(toPaths) || intl(ccPaths) || intl(bccPaths)
+
+	replyTos, replyToPaths := xparseAddresses(m.ReplyTo)
+	for _, rt := range replyToPaths {
+		if rt.Localpart.IsInternational() {
+			smtputf8 = true
+		}
+	}
+
+	// Create file to compose message into.
+	dataFile, err := store.CreateMessageTemp(log, "webapi-submit")
+	xcheckf(err, "creating temporary file for message")
+	defer store.CloseRemoveTempFile(log, dataFile, "message to submit")
+
+	// If writing to the message file fails, we abort immediately.
+	xc := message.NewComposer(dataFile, s.maxMsgSize, smtputf8)
+	defer func() {
+		x := recover()
+		if x == nil {
+			return
+		}
+		if err, ok := x.(error); ok && errors.Is(err, message.ErrMessageSize) {
+			panic(webapi.Error{Code: "messageTooLarge", Message: "message too large"})
+		} else if ok && errors.Is(err, message.ErrCompose) {
+			xcheckf(err, "making message")
+		}
+		panic(x)
+	}()
+
+	// Each queued message gets a Received header.
+	// We cannot use VIA, because there is no registered method. We would like to use
+	// it to add the ascii domain name in case of smtputf8 and IDNA host name.
+	// We don't add the IP address of the submitter. Exposing it is likely not desirable.
+	recvFrom := message.HeaderCommentDomain(mox.Conf.Static.HostnameDomain, smtputf8)
+	recvBy := mox.Conf.Static.HostnameDomain.XName(smtputf8)
+	recvID := mox.ReceivedID(mox.CidFromCtx(ctx))
+	recvHdrFor := func(rcptTo string) string {
+		recvHdr := &message.HeaderWriter{}
+		// For additional Received-header clauses, see:
+		// https://www.iana.org/assignments/mail-parameters/mail-parameters.xhtml#table-mail-parameters-8
+		// Note: we don't have "via" or "with", there is nothing registered for webmail.
+		recvHdr.Add(" ", "Received:", "from", recvFrom, "by", recvBy, "id", recvID) // ../rfc/5321:3158
+		if reqInfo.Request.TLS != nil {
+			recvHdr.Add(" ", mox.TLSReceivedComment(log, *reqInfo.Request.TLS)...)
+		}
+		recvHdr.Add(" ", "for", "<"+rcptTo+">;", time.Now().Format(message.RFC5322Z))
+		return recvHdr.String()
+	}
+
+	// Outer message headers.
+	xc.HeaderAddrs("From", []message.NameAddress{from})
+	if len(replyTos) > 0 {
+		xc.HeaderAddrs("Reply-To", replyTos)
+	}
+	xc.HeaderAddrs("To", to)
+	xc.HeaderAddrs("Cc", cc)
+	if m.Subject != "" {
+		xcheckcontrol(m.Subject)
+		xc.Subject(m.Subject)
+	}
+
+	var date time.Time
+	if m.Date != nil {
+		date = *m.Date
+	} else {
+		date = time.Now()
+	}
+	xc.Header("Date", date.Format(message.RFC5322Z))
+
+	if m.MessageID == "" {
+		m.MessageID = fmt.Sprintf("<%s>", mox.MessageIDGen(smtputf8))
+	} else if !strings.HasPrefix(m.MessageID, "<") || !strings.HasSuffix(m.MessageID, ">") {
+		return resp, webapi.Error{Code: "malformedMessageID", Message: "missing <> in message-id"}
+	}
+	xcheckcontrol(m.MessageID)
+	xc.Header("Message-Id", m.MessageID)
+
+	if len(m.References) > 0 {
+		for _, ref := range m.References {
+			xcheckcontrol(ref)
+			// We don't check for <>'s. If caller just puts in what they got, we don't want to
+			// reject the message.
+		}
+		xc.Header("References", strings.Join(m.References, "\r\n\t"))
+		xc.Header("In-Reply-To", m.References[len(m.References)-1])
+	}
+	xc.Header("MIME-Version", "1.0")
+
+	var haveUserAgent bool
+	for _, kv := range req.Headers {
+		xcheckcontrol(kv[0])
+		xcheckcontrol(kv[1])
+		xc.Header(kv[0], kv[1])
+		if strings.EqualFold(kv[0], "User-Agent") || strings.EqualFold(kv[0], "X-Mailer") {
+			haveUserAgent = true
+		}
+	}
+	if !haveUserAgent {
+		xc.Header("User-Agent", "mox/"+moxvar.Version)
+	}
+
+	// Whether we have additional separately inline/attached file(s).
+	mpf := reqInfo.Request.MultipartForm
+	formInline := mpf != nil && len(mpf.File["inlinefile"]) > 0
+	formAttachment := mpf != nil && len(mpf.File["attachedfile"]) > 0
+
+	// MIME structure we'll build:
+	// - multipart/mixed (in case of attached files)
+	//   - multipart/related (in case of inline files, we assume they are relevant for both the text and html part, if present)
+	//     - multipart/alternative (in case we have both text and html bodies)
+	//       - text/plain (optional)
+	//       - text/html (optional)
+	//     - inline file, ...
+	//   - attached file, ...
+
+	// We keep track of cur, which is where we add new parts to, whether the text or
+	// html part, or the inline or attached files.
+	var cur, mixed, related, alternative *multipart.Writer
+	xcreateMultipart := func(subtype string) *multipart.Writer {
+		mp := multipart.NewWriter(xc)
+		if cur == nil {
+			xc.Header("Content-Type", fmt.Sprintf(`multipart/%s; boundary="%s"`, subtype, mp.Boundary()))
+			xc.Line()
+		} else {
+			_, err := cur.CreatePart(textproto.MIMEHeader{"Content-Type": []string{fmt.Sprintf(`multipart/%s; boundary="%s"`, subtype, mp.Boundary())}})
+			xcheckf(err, "adding multipart")
+		}
+		return mp
+	}
+	xcreatePart := func(header textproto.MIMEHeader) io.Writer {
+		if cur == nil {
+			for k, vl := range header {
+				for _, v := range vl {
+					xc.Header(k, v)
+				}
+			}
+			xc.Line()
+			return xc
+		}
+		p, err := cur.CreatePart(header)
+		xcheckf(err, "adding part")
+		return p
+	}
+	// We create multiparts from outer structure to inner. Then for each we add its
+	// inner parts and close the multipart.
+	if len(req.AttachedFiles) > 0 || formAttachment {
+		mixed = xcreateMultipart("mixed")
+		cur = mixed
+	}
+	if len(req.InlineFiles) > 0 || formInline {
+		related = xcreateMultipart("related")
+		cur = related
+	}
+	if m.Text != "" && m.HTML != "" {
+		alternative = xcreateMultipart("alternative")
+		cur = alternative
+	}
+	if m.Text != "" {
+		textBody, ct, cte := xc.TextPart("plain", m.Text)
+		tp := xcreatePart(textproto.MIMEHeader{"Content-Type": []string{ct}, "Content-Transfer-Encoding": []string{cte}})
+		_, err := tp.Write([]byte(textBody))
+		xcheckf(err, "write text part")
+	}
+	if m.HTML != "" {
+		htmlBody, ct, cte := xc.TextPart("html", m.HTML)
+		tp := xcreatePart(textproto.MIMEHeader{"Content-Type": []string{ct}, "Content-Transfer-Encoding": []string{cte}})
+		_, err := tp.Write([]byte(htmlBody))
+		xcheckf(err, "write html part")
+	}
+	if alternative != nil {
+		alternative.Close()
+		alternative = nil
+	}
+
+	xaddFileBase64 := func(ct string, inline bool, filename string, cid string, base64Data string) {
+		h := textproto.MIMEHeader{}
+		disp := "attachment"
+		if inline {
+			disp = "inline"
+		}
+		cd := mime.FormatMediaType(disp, map[string]string{"filename": filename})
+
+		h.Set("Content-Type", ct)
+		h.Set("Content-Disposition", cd)
+		if cid != "" {
+			h.Set("Content-ID", cid)
+		}
+		h.Set("Content-Transfer-Encoding", "base64")
+		p := xcreatePart(h)
+
+		for len(base64Data) > 0 {
+			line := base64Data
+			n := len(line)
+			if n > 78 {
+				n = 78
+			}
+			line, base64Data = base64Data[:n], base64Data[n:]
+			_, err := p.Write([]byte(line))
+			xcheckf(err, "writing attachment")
+			_, err = p.Write([]byte("\r\n"))
+			xcheckf(err, "writing attachment")
+		}
+	}
+	xaddJSONFiles := func(l []webapi.File, inline bool) {
+		for _, f := range l {
+			if f.ContentType == "" {
+				buf, _ := io.ReadAll(io.LimitReader(base64.NewDecoder(base64.StdEncoding, strings.NewReader(f.Data)), 512))
+				f.ContentType = http.DetectContentType(buf)
+				if f.ContentType == "application/octet-stream" {
+					f.ContentType = ""
+				}
+			}
+
+			// Ensure base64 is valid, then we'll write the original string.
+			_, err := io.Copy(io.Discard, base64.NewDecoder(base64.StdEncoding, strings.NewReader(f.Data)))
+			xcheckuserf(err, "parsing attachment as base64")
+
+			xaddFileBase64(f.ContentType, inline, f.Name, f.ContentID, f.Data)
+		}
+	}
+	xaddFile := func(fh *multipart.FileHeader, inline bool) {
+		f, err := fh.Open()
+		xcheckf(err, "open uploaded file")
+		defer func() {
+			err := f.Close()
+			log.Check(err, "closing uploaded file")
+		}()
+
+		ct := fh.Header.Get("Content-Type")
+		if ct == "" {
+			buf, err := io.ReadAll(io.LimitReader(f, 512))
+			if err == nil {
+				ct = http.DetectContentType(buf)
+			}
+			_, err = f.Seek(0, 0)
+			xcheckf(err, "rewind uploaded file after content-detection")
+			if ct == "application/octet-stream" {
+				ct = ""
+			}
+		}
+
+		h := textproto.MIMEHeader{}
+		disp := "attachment"
+		if inline {
+			disp = "inline"
+		}
+		cd := mime.FormatMediaType(disp, map[string]string{"filename": fh.Filename})
+
+		if ct != "" {
+			h.Set("Content-Type", ct)
+		}
+		h.Set("Content-Disposition", cd)
+		cid := fh.Header.Get("Content-ID")
+		if cid != "" {
+			h.Set("Content-ID", cid)
+		}
+		h.Set("Content-Transfer-Encoding", "base64")
+		p := xcreatePart(h)
+		bw := moxio.Base64Writer(p)
+		_, err = io.Copy(bw, f)
+		xcheckf(err, "adding uploaded file")
+		err = bw.Close()
+		xcheckf(err, "flushing uploaded file")
+	}
+
+	cur = related
+	xaddJSONFiles(req.InlineFiles, true)
+	if mpf != nil {
+		for _, fh := range mpf.File["inlinefile"] {
+			xaddFile(fh, true)
+		}
+	}
+	if related != nil {
+		related.Close()
+		related = nil
+	}
+	cur = mixed
+	xaddJSONFiles(req.AttachedFiles, false)
+	if mpf != nil {
+		for _, fh := range mpf.File["attachedfile"] {
+			xaddFile(fh, false)
+		}
+	}
+	if mixed != nil {
+		mixed.Close()
+		mixed = nil
+	}
+	cur = nil
+	xc.Flush()
+
+	// Add DKIM-Signature headers.
+	var msgPrefix string
+	fd := from.Address.Domain
+	confDom, _ := mox.Conf.Domain(fd)
+	selectors := mox.DKIMSelectors(confDom.DKIM)
+	if len(selectors) > 0 {
+		dkimHeaders, err := dkim.Sign(ctx, log.Logger, from.Address.Localpart, fd, selectors, smtputf8, dataFile)
+		if err != nil {
+			metricServerErrors.WithLabelValues("dkimsign").Inc()
+		}
+		xcheckf(err, "sign dkim")
+
+		msgPrefix = dkimHeaders
+	}
+
+	loginAddr, err := smtp.ParseAddress(reqInfo.LoginAddress)
+	xcheckf(err, "parsing login address")
+	useFromID := slices.Contains(accConf.ParsedFromIDLoginAddresses, loginAddr)
+	var localpartBase string
+	if useFromID {
+		if confDom.LocalpartCatchallSeparator == "" {
+			xcheckuserf(errors.New(`localpart catchall separator must be configured for domain`), `composing unique "from" address`)
+		}
+		localpartBase = strings.SplitN(string(fromPath.Localpart), confDom.LocalpartCatchallSeparator, 2)[0]
+	}
+	fromIDs := make([]string, len(recipients))
+	qml := make([]queue.Msg, len(recipients))
+	now := time.Now()
+	for i, rcpt := range recipients {
+		fp := fromPath
+		if useFromID {
+			fromIDs[i] = xrandomID(16)
+			fp.Localpart = smtp.Localpart(localpartBase + confDom.LocalpartCatchallSeparator + fromIDs[i])
+		}
+
+		// Don't use per-recipient unique message prefix when multiple recipients are
+		// present, we want to keep the message identical.
+		var recvRcpt string
+		if len(recipients) == 1 {
+			recvRcpt = rcpt.XString(smtputf8)
+		}
+		rcptMsgPrefix := recvHdrFor(recvRcpt) + msgPrefix
+		msgSize := int64(len(rcptMsgPrefix)) + xc.Size
+		qm := queue.MakeMsg(fp, rcpt, xc.Has8bit, xc.SMTPUTF8, msgSize, m.MessageID, []byte(rcptMsgPrefix), req.RequireTLS, now, m.Subject)
+		qm.FromID = fromIDs[i]
+		qm.Extra = req.Extra
+		if req.FutureRelease != nil {
+			ival := time.Until(*req.FutureRelease)
+			if ival > queue.FutureReleaseIntervalMax {
+				xcheckuserf(fmt.Errorf("date/time can not be further than %v in the future", queue.FutureReleaseIntervalMax), "scheduling delivery")
+			}
+			qm.NextAttempt = *req.FutureRelease
+			qm.FutureReleaseRequest = "until;" + req.FutureRelease.Format(time.RFC3339)
+			// todo: possibly add a header to the message stored in the Sent mailbox to indicate it was scheduled for later delivery.
+		}
+		qml[i] = qm
+	}
+	err = queue.Add(ctx, log, acc.Name, dataFile, qml...)
+	if err != nil {
+		metricSubmission.WithLabelValues("queueerror").Inc()
+	}
+	xcheckf(err, "adding messages to the delivery queue")
+	metricSubmission.WithLabelValues("ok").Inc()
+
+	if req.SaveSent {
+		// Append message to Sent mailbox and mark original messages as answered/forwarded.
+		acc.WithRLock(func() {
+			var changes []store.Change
+
+			metricked := false
+			defer func() {
+				if x := recover(); x != nil {
+					if !metricked {
+						metricServerErrors.WithLabelValues("submit").Inc()
+					}
+					panic(x)
+				}
+			}()
+			xdbwrite(ctx, reqInfo.Account, func(tx *bstore.Tx) {
+				sentmb, err := bstore.QueryTx[store.Mailbox](tx).FilterEqual("Sent", true).Get()
+				if err == bstore.ErrAbsent {
+					// There is no mailbox designated as Sent mailbox, so we're done.
+					return
+				}
+				xcheckf(err, "message submitted to queue, adding to Sent mailbox")
+
+				modseq, err := acc.NextModSeq(tx)
+				xcheckf(err, "next modseq")
+
+				sentm := store.Message{
+					CreateSeq:     modseq,
+					ModSeq:        modseq,
+					MailboxID:     sentmb.ID,
+					MailboxOrigID: sentmb.ID,
+					Flags:         store.Flags{Notjunk: true, Seen: true},
+					Size:          int64(len(msgPrefix)) + xc.Size,
+					MsgPrefix:     []byte(msgPrefix),
+				}
+
+				if ok, maxSize, err := acc.CanAddMessageSize(tx, sentm.Size); err != nil {
+					xcheckf(err, "checking quota")
+				} else if !ok {
+					panic(webapi.Error{Code: "sentOverQuota", Message: fmt.Sprintf("message was sent, but not stored in sent mailbox due to quota of total %d bytes reached", maxSize)})
+				}
+
+				// Update mailbox before delivery, which changes uidnext.
+				sentmb.Add(sentm.MailboxCounts())
+				err = tx.Update(&sentmb)
+				xcheckf(err, "updating sent mailbox for counts")
+
+				err = acc.DeliverMessage(log, tx, &sentm, dataFile, true, false, false, true)
+				if err != nil {
+					metricSubmission.WithLabelValues("storesenterror").Inc()
+					metricked = true
+				}
+				xcheckf(err, "message submitted to queue, appending message to Sent mailbox")
+
+				changes = append(changes, sentm.ChangeAddUID(), sentmb.ChangeCounts())
+			})
+
+			store.BroadcastChanges(acc, changes)
+		})
+	}
+
+	submissions := make([]webapi.Submission, len(qml))
+	for i, qm := range qml {
+		submissions[i] = webapi.Submission{
+			Address:    addresses[i].Address,
+			QueueMsgID: qm.ID,
+			FromID:     fromIDs[i],
+		}
+	}
+	resp = webapi.SendResult{
+		MessageID:   m.MessageID,
+		Submissions: submissions,
+	}
+	return resp, nil
+}
+
+func (s server) SuppressionList(ctx context.Context, req webapi.SuppressionListRequest) (resp webapi.SuppressionListResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	resp.Suppressions, err = queue.SuppressionList(ctx, reqInfo.Account.Name)
+	return
+}
+
+func (s server) SuppressionAdd(ctx context.Context, req webapi.SuppressionAddRequest) (resp webapi.SuppressionAddResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	addr := xparseAddress(req.EmailAddress)
+	sup := webapi.Suppression{
+		Account: reqInfo.Account.Name,
+		Manual:  req.Manual,
+		Reason:  req.Reason,
+	}
+	err = queue.SuppressionAdd(ctx, addr.Path(), &sup)
+	return resp, err
+}
+
+func (s server) SuppressionRemove(ctx context.Context, req webapi.SuppressionRemoveRequest) (resp webapi.SuppressionRemoveResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	addr := xparseAddress(req.EmailAddress)
+	err = queue.SuppressionRemove(ctx, reqInfo.Account.Name, addr.Path())
+	return resp, err
+}
+
+func (s server) SuppressionPresent(ctx context.Context, req webapi.SuppressionPresentRequest) (resp webapi.SuppressionPresentResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	addr := xparseAddress(req.EmailAddress)
+	sup, err := queue.SuppressionLookup(ctx, reqInfo.Account.Name, addr.Path())
+	if sup != nil {
+		resp.Present = true
+	}
+	return resp, err
+}
+
+func xwebapiAddresses(l []message.Address) (r []webapi.NameAddress) {
+	r = make([]webapi.NameAddress, len(l))
+	for i, ma := range l {
+		dom, err := dns.ParseDomain(ma.Host)
+		xcheckf(err, "parsing host %q for address", ma.Host)
+		lp, err := smtp.ParseLocalpart(ma.User)
+		xcheckf(err, "parsing localpart %q for address", ma.User)
+		path := smtp.Path{Localpart: lp, IPDomain: dns.IPDomain{Domain: dom}}
+		r[i] = webapi.NameAddress{Name: ma.Name, Address: path.XString(true)}
+	}
+	return r
+}
+
+// caller should hold account lock.
+func xmessageGet(ctx context.Context, acc *store.Account, msgID int64) (store.Message, store.Mailbox) {
+	m := store.Message{ID: msgID}
+	var mb store.Mailbox
+	err := acc.DB.Read(ctx, func(tx *bstore.Tx) error {
+		if err := tx.Get(&m); err == bstore.ErrAbsent || err == nil && m.Expunged {
+			panic(webapi.Error{Code: "messageNotFound", Message: "message not found"})
+		}
+		mb = store.Mailbox{ID: m.MailboxID}
+		return tx.Get(&mb)
+	})
+	xcheckf(err, "get message")
+	return m, mb
+}
+
+func (s server) MessageGet(ctx context.Context, req webapi.MessageGetRequest) (resp webapi.MessageGetResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	log := reqInfo.Log
+	acc := reqInfo.Account
+
+	var m store.Message
+	var mb store.Mailbox
+	var msgr *store.MsgReader
+	acc.WithRLock(func() {
+		m, mb = xmessageGet(ctx, acc, req.MsgID)
+		msgr = acc.MessageReader(m)
+	})
+	defer func() {
+		if err != nil {
+			msgr.Close()
+		}
+	}()
+
+	p, err := m.LoadPart(msgr)
+	xcheckf(err, "load parsed message")
+
+	var env message.Envelope
+	if p.Envelope != nil {
+		env = *p.Envelope
+	}
+	text, html, _, err := webops.ReadableParts(p, 1*1024*1024)
+	if err != nil {
+		log.Debugx("looking for text and html content in message", err)
+	}
+	date := &env.Date
+	if date.IsZero() {
+		date = nil
+	}
+
+	// Parse References message header.
+	h, err := p.Header()
+	if err != nil {
+		log.Debugx("parsing headers for References", err)
+	}
+	var refs []string
+	for _, s := range h.Values("References") {
+		s = strings.ReplaceAll(s, "\t", " ")
+		for _, w := range strings.Split(s, " ") {
+			if w != "" {
+				refs = append(refs, w)
+			}
+		}
+	}
+	if env.InReplyTo != "" && !slices.Contains(refs, env.InReplyTo) {
+		// References are ordered, most recent first. In-Reply-To is less powerful/older.
+		// So if both are present, give References preference, prepending the In-Reply-To
+		// header.
+		refs = append([]string{env.InReplyTo}, refs...)
+	}
+
+	msg := webapi.Message{
+		From:       xwebapiAddresses(env.From),
+		To:         xwebapiAddresses(env.To),
+		CC:         xwebapiAddresses(env.CC),
+		BCC:        xwebapiAddresses(env.BCC),
+		ReplyTo:    xwebapiAddresses(env.ReplyTo),
+		MessageID:  env.MessageID,
+		References: refs,
+		Date:       date,
+		Subject:    env.Subject,
+		Text:       strings.ReplaceAll(text, "\r\n", "\n"),
+		HTML:       strings.ReplaceAll(html, "\r\n", "\n"),
+	}
+
+	var msgFrom string
+	if d, err := dns.ParseDomain(m.MsgFromDomain); err == nil {
+		msgFrom = smtp.Address{Localpart: m.MsgFromLocalpart, Domain: d}.Pack(true)
+	}
+	meta := webapi.MessageMeta{
+		Size:                m.Size,
+		DSN:                 m.DSN,
+		Flags:               append(m.Flags.Strings(), m.Keywords...),
+		MailFrom:            m.MailFrom,
+		MailFromValidated:   m.MailFromValidated,
+		MsgFrom:             msgFrom,
+		MsgFromValidated:    m.MsgFromValidated,
+		DKIMVerifiedDomains: m.DKIMDomains,
+		RemoteIP:            m.RemoteIP,
+		MailboxName:         mb.Name,
+	}
+
+	result := webapi.MessageGetResult{
+		Message:   msg,
+		Structure: webhook.PartStructure(&p),
+		Meta:      meta,
+	}
+	return result, nil
+}
+
+func (s server) MessageRawGet(ctx context.Context, req webapi.MessageRawGetRequest) (resp io.ReadCloser, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	acc := reqInfo.Account
+
+	var m store.Message
+	var msgr *store.MsgReader
+	acc.WithRLock(func() {
+		m, _ = xmessageGet(ctx, acc, req.MsgID)
+		msgr = acc.MessageReader(m)
+	})
+
+	reqInfo.Response.Header().Set("Content-Type", "text/plain")
+	return msgr, nil
+}
+
+func (s server) MessagePartGet(ctx context.Context, req webapi.MessagePartGetRequest) (resp io.ReadCloser, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	acc := reqInfo.Account
+
+	var m store.Message
+	var msgr *store.MsgReader
+	acc.WithRLock(func() {
+		m, _ = xmessageGet(ctx, acc, req.MsgID)
+		msgr = acc.MessageReader(m)
+	})
+	defer func() {
+		if err != nil {
+			msgr.Close()
+		}
+	}()
+
+	p, err := m.LoadPart(msgr)
+	xcheckf(err, "load parsed message")
+
+	for i, index := range req.PartPath {
+		if index < 0 || index >= len(p.Parts) {
+			return nil, webapi.Error{Code: "partNotFound", Message: fmt.Sprintf("part %d at index %d not found", index, i)}
+		}
+		p = p.Parts[index]
+	}
+	return struct {
+		io.Reader
+		io.Closer
+	}{Reader: p.Reader(), Closer: msgr}, nil
+}
+
+var xops = webops.XOps{
+	DBWrite: xdbwrite,
+	Checkf: func(ctx context.Context, err error, format string, args ...any) {
+		xcheckf(err, format, args...)
+	},
+	Checkuserf: func(ctx context.Context, err error, format string, args ...any) {
+		if err != nil && errors.Is(err, webops.ErrMessageNotFound) {
+			msg := fmt.Sprintf("%s: %s", fmt.Sprintf(format, args...), err)
+			panic(webapi.Error{Code: "messageNotFound", Message: msg})
+		}
+		xcheckuserf(err, format, args...)
+	},
+}
+
+func (s server) MessageDelete(ctx context.Context, req webapi.MessageDeleteRequest) (resp webapi.MessageDeleteResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	xops.MessageDelete(ctx, reqInfo.Log, reqInfo.Account, []int64{req.MsgID})
+	return
+}
+
+func (s server) MessageFlagsAdd(ctx context.Context, req webapi.MessageFlagsAddRequest) (resp webapi.MessageFlagsAddResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	xops.MessageFlagsAdd(ctx, reqInfo.Log, reqInfo.Account, []int64{req.MsgID}, req.Flags)
+	return
+}
+
+func (s server) MessageFlagsRemove(ctx context.Context, req webapi.MessageFlagsRemoveRequest) (resp webapi.MessageFlagsRemoveResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	xops.MessageFlagsClear(ctx, reqInfo.Log, reqInfo.Account, []int64{req.MsgID}, req.Flags)
+	return
+}
+
+func (s server) MessageMove(ctx context.Context, req webapi.MessageMoveRequest) (resp webapi.MessageMoveResult, err error) {
+	reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
+	xops.MessageMove(ctx, reqInfo.Log, reqInfo.Account, []int64{req.MsgID}, req.DestMailboxName, 0)
+	return
+}
diff --git a/webapisrv/server_test.go b/webapisrv/server_test.go
new file mode 100644
index 0000000..8e7ab36
--- /dev/null
+++ b/webapisrv/server_test.go
@@ -0,0 +1,491 @@
+package webapisrv
+
+import (
+	"bytes"
+	"context"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"io"
+	"mime/multipart"
+	"net/http"
+	"net/http/httptest"
+	"net/textproto"
+	"os"
+	"path/filepath"
+	"reflect"
+	"slices"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/mjl-/mox/message"
+	"github.com/mjl-/mox/mlog"
+	"github.com/mjl-/mox/mox-"
+	"github.com/mjl-/mox/queue"
+	"github.com/mjl-/mox/store"
+	"github.com/mjl-/mox/webapi"
+	"github.com/mjl-/mox/webhook"
+)
+
+var ctxbg = context.Background()
+
+func tcheckf(t *testing.T, err error, format string, args ...any) {
+	t.Helper()
+	if err != nil {
+		t.Fatalf("%s: %s", fmt.Sprintf(format, args...), err)
+	}
+}
+
+func tcompare(t *testing.T, got, expect any) {
+	t.Helper()
+	if !reflect.DeepEqual(got, expect) {
+		t.Fatalf("got:\n%#v\nexpected:\n%#v", got, expect)
+	}
+}
+
+func terrcode(t *testing.T, err error, code string) {
+	t.Helper()
+	if err == nil {
+		t.Fatalf("no error, expected error with code %q", code)
+	}
+	if xerr, ok := err.(webapi.Error); !ok {
+		t.Fatalf("got %v, expected webapi error with code %q", err, code)
+	} else if xerr.Code != code {
+		t.Fatalf("got error code %q, expected %q", xerr.Code, code)
+	}
+}
+
+func TestServer(t *testing.T) {
+	mox.LimitersInit()
+	os.RemoveAll("../testdata/webapisrv/data")
+	mox.Context = ctxbg
+	mox.ConfigStaticPath = filepath.FromSlash("../testdata/webapisrv/mox.conf")
+	mox.MustLoadConfig(true, false)
+	defer store.Switchboard()()
+	err := queue.Init()
+	tcheckf(t, err, "queue init")
+
+	log := mlog.New("webapisrv", nil)
+	acc, err := store.OpenAccount(log, "mjl")
+	tcheckf(t, err, "open account")
+	const pw0 = "te\u0301st \u00a0\u2002\u200a" // NFD and various unicode spaces.
+	const pw1 = "tést "                         // PRECIS normalized, with NFC.
+	err = acc.SetPassword(log, pw0)
+	tcheckf(t, err, "set password")
+	defer func() {
+		err := acc.Close()
+		log.Check(err, "closing account")
+	}()
+
+	s := NewServer(100*1024, "/webapi/", false).(server)
+	hs := httptest.NewServer(s)
+	defer hs.Close()
+
+	// server expects the mount path to be stripped already.
+	client := webapi.Client{BaseURL: hs.URL + "/v0/", Username: "mjl@mox.example", Password: pw0}
+
+	testHTTPHdrsBody := func(s server, method, path string, headers map[string]string, body string, expCode int, expTooMany bool, expCT, expErrCode string) {
+		t.Helper()
+
+		r := httptest.NewRequest(method, path, strings.NewReader(body))
+		for k, v := range headers {
+			r.Header.Set(k, v)
+		}
+		w := httptest.NewRecorder()
+		s.ServeHTTP(w, r)
+		res := w.Result()
+		if res.StatusCode != http.StatusTooManyRequests || !expTooMany {
+			tcompare(t, res.StatusCode, expCode)
+		}
+		if expCT != "" {
+			tcompare(t, res.Header.Get("Content-Type"), expCT)
+		}
+		if expErrCode != "" {
+			dec := json.NewDecoder(res.Body)
+			dec.DisallowUnknownFields()
+			var apierr webapi.Error
+			err := dec.Decode(&apierr)
+			tcheckf(t, err, "decoding json error")
+			tcompare(t, apierr.Code, expErrCode)
+		}
+	}
+	testHTTP := func(method, path string, expCode int, expCT string) {
+		t.Helper()
+		testHTTPHdrsBody(s, method, path, nil, "", expCode, false, expCT, "")
+	}
+
+	testHTTP("GET", "/", http.StatusSeeOther, "")
+	testHTTP("POST", "/", http.StatusMethodNotAllowed, "")
+	testHTTP("GET", "/v0/", http.StatusOK, "text/html; charset=utf-8")
+	testHTTP("GET", "/other/", http.StatusNotFound, "")
+	testHTTP("GET", "/v0/Send", http.StatusOK, "text/html; charset=utf-8")
+	testHTTP("GET", "/v0/MessageRawGet", http.StatusOK, "text/html; charset=utf-8")
+	testHTTP("GET", "/v0/Bogus", http.StatusNotFound, "")
+	testHTTP("PUT", "/v0/Send", http.StatusMethodNotAllowed, "")
+	testHTTP("POST", "/v0/Send", http.StatusUnauthorized, "")
+
+	for i := 0; i < 11; i++ {
+		// Missing auth doesn't trigger auth rate limiter.
+		testHTTP("POST", "/v0/Send", http.StatusUnauthorized, "")
+	}
+	for i := 0; i < 21; i++ {
+		// Bad auth does.
+		expCode := http.StatusUnauthorized
+		tooMany := i >= 10
+		if i == 20 {
+			expCode = http.StatusTooManyRequests
+		}
+		testHTTPHdrsBody(s, "POST", "/v0/Send", map[string]string{"Authorization": "Basic " + base64.StdEncoding.EncodeToString([]byte("mjl@mox.example:badpassword"))}, "", expCode, tooMany, "", "")
+	}
+	mox.LimitersInit()
+
+	// Request with missing X-Forwarded-For.
+	sfwd := NewServer(100*1024, "/webapi/", true).(server)
+	testHTTPHdrsBody(sfwd, "POST", "/v0/Send", map[string]string{"Authorization": "Basic " + base64.StdEncoding.EncodeToString([]byte("mjl@mox.example:badpassword"))}, "", http.StatusInternalServerError, false, "", "")
+
+	// Body must be form, not JSON.
+	authz := "Basic " + base64.StdEncoding.EncodeToString([]byte("mjl@mox.example:"+pw1))
+	testHTTPHdrsBody(s, "POST", "/v0/Send", map[string]string{"Content-Type": "application/json", "Authorization": authz}, "{}", http.StatusBadRequest, false, "application/json; charset=utf-8", "protocol")
+	testHTTPHdrsBody(s, "POST", "/v0/Send", map[string]string{"Content-Type": "multipart/form-data", "Authorization": authz}, "not formdata", http.StatusBadRequest, false, "application/json; charset=utf-8", "protocol")
+	formAuth := map[string]string{
+		"Content-Type":  "application/x-www-form-urlencoded",
+		"Authorization": authz,
+	}
+	testHTTPHdrsBody(s, "POST", "/v0/Send", formAuth, "not encoded\n\n", http.StatusBadRequest, false, "application/json; charset=utf-8", "protocol")
+	// Missing "request".
+	testHTTPHdrsBody(s, "POST", "/v0/Send", formAuth, "", http.StatusBadRequest, false, "application/json; charset=utf-8", "protocol")
+	// "request" must be JSON.
+	testHTTPHdrsBody(s, "POST", "/v0/Send", formAuth, "request=notjson", http.StatusBadRequest, false, "application/json; charset=utf-8", "protocol")
+	// "request" must be JSON object.
+ testHTTPHdrsBody(s, "POST", "/v0/Send", formAuth, "request=[]", http.StatusBadRequest, false, "application/json; charset=utf-8", "protocol") + + // Send message. Look for the message in the queue. + now := time.Now() + yes := true + sendReq := webapi.SendRequest{ + Message: webapi.Message{ + From: []webapi.NameAddress{{Name: "møx", Address: "mjl@mox.example"}}, + To: []webapi.NameAddress{{Name: "móx", Address: "mjl+to@mox.example"}, {Address: "mjl+to2@mox.example"}}, + CC: []webapi.NameAddress{{Name: "möx", Address: "mjl+cc@mox.example"}}, + BCC: []webapi.NameAddress{{Name: "møx", Address: "mjl+bcc@mox.example"}}, + ReplyTo: []webapi.NameAddress{{Name: "reply1", Address: "mox+reply1@mox.example"}, {Name: "reply2", Address: "mox+reply2@mox.example"}}, + MessageID: "", + References: []string{"", ""}, + Date: &now, + Subject: "¡hello world!", + Text: "hi ☺\n", + HTML: ``, // Newline will be added. + }, + Extra: map[string]string{"a": "123"}, + Headers: [][2]string{{"x-custom", "header"}}, + InlineFiles: []webapi.File{ + { + Name: "x.png", + ContentType: "image/png", + ContentID: "", + Data: base64.StdEncoding.EncodeToString([]byte("png data")), + }, + }, + AttachedFiles: []webapi.File{ + { + Data: base64.StdEncoding.EncodeToString([]byte("%PDF-")), // Should be detected as PDF. + }, + }, + RequireTLS: &yes, + FutureRelease: &now, + SaveSent: true, + } + sendResp, err := client.Send(ctxbg, sendReq) + tcheckf(t, err, "send message") + tcompare(t, sendResp.MessageID, sendReq.Message.MessageID) + tcompare(t, len(sendResp.Submissions), 2+1+1) // 2 to, 1 cc, 1 bcc + subs := sendResp.Submissions + tcompare(t, subs[0].Address, "mjl+to@mox.example") + tcompare(t, subs[1].Address, "mjl+to2@mox.example") + tcompare(t, subs[2].Address, "mjl+cc@mox.example") + tcompare(t, subs[3].Address, "mjl+bcc@mox.example") + tcompare(t, subs[3].QueueMsgID, subs[0].QueueMsgID+3) + tcompare(t, subs[0].FromID, "") + // todo: look in queue for parameters. parse the message. 
+ + // Send a custom multipart/form-data POST, with different request parameters, and + // additional files. + var sb strings.Builder + mp := multipart.NewWriter(&sb) + fdSendReq := webapi.SendRequest{ + Message: webapi.Message{ + To: []webapi.NameAddress{{Address: "møx@mox.example"}}, + // Let server assign date, message-id. + Subject: "test", + Text: "hi", + }, + // Don't let server add its own user-agent. + Headers: [][2]string{{"User-Agent", "test"}}, + } + sendReqBuf, err := json.Marshal(fdSendReq) + tcheckf(t, err, "send request") + mp.WriteField("request", string(sendReqBuf)) + // Two inline PDFs. + pw, err := mp.CreateFormFile("inlinefile", "test.pdf") + tcheckf(t, err, "create inline pdf file") + _, err = fmt.Fprint(pw, "%PDF-") + tcheckf(t, err, "write pdf") + pw, err = mp.CreateFormFile("inlinefile", "test.pdf") + tcheckf(t, err, "create second inline pdf file") + _, err = fmt.Fprint(pw, "%PDF-") + tcheckf(t, err, "write second pdf") + + // One attached PDF. + fh := textproto.MIMEHeader{} + fh.Set("Content-Disposition", `form-data; name="attachedfile"; filename="test.pdf"`) + fh.Set("Content-ID", "") + pw, err = mp.CreatePart(fh) + tcheckf(t, err, "create attached pdf file") + _, err = fmt.Fprint(pw, "%PDF-") + tcheckf(t, err, "write attached pdf") + fdct := mp.FormDataContentType() + err = mp.Close() + tcheckf(t, err, "close multipart") + + // Perform custom POST. + req, err := http.NewRequest("POST", hs.URL+"/v0/Send", strings.NewReader(sb.String())) + tcheckf(t, err, "new request") + req.Header.Set("Content-Type", fdct) + // Use a unique MAIL FROM id when delivering. 
+	req.Header.Set("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte("mjl+fromid@mox.example:"+pw1)))
+	resp, err := http.DefaultClient.Do(req)
+	tcheckf(t, err, "request multipart/form-data")
+	tcompare(t, resp.StatusCode, http.StatusOK)
+	var sendRes webapi.SendResult
+	err = json.NewDecoder(resp.Body).Decode(&sendRes)
+	tcheckf(t, err, "parse send response")
+	tcompare(t, sendRes.MessageID != "", true)
+	tcompare(t, len(sendRes.Submissions), 1)
+	tcompare(t, sendRes.Submissions[0].FromID != "", true)
+
+	// Trigger various error conditions.
+	_, err = client.Send(ctxbg, webapi.SendRequest{
+		Message: webapi.Message{
+			To:      []webapi.NameAddress{{Address: "mjl@mox.example"}},
+			Subject: "test",
+		},
+	})
+	terrcode(t, err, "missingBody")
+
+	_, err = client.Send(ctxbg, webapi.SendRequest{
+		Message: webapi.Message{
+			From:    []webapi.NameAddress{{Address: "other@mox.example"}},
+			To:      []webapi.NameAddress{{Address: "mjl@mox.example"}},
+			Subject: "test",
+			Text:    "hi",
+		},
+	})
+	terrcode(t, err, "badFrom")
+
+	_, err = client.Send(ctxbg, webapi.SendRequest{
+		Message: webapi.Message{
+			From:    []webapi.NameAddress{{Address: "mox@mox.example"}, {Address: "mox@mox.example"}},
+			To:      []webapi.NameAddress{{Address: "mjl@mox.example"}},
+			Subject: "test",
+			Text:    "hi",
+		},
+	})
+	terrcode(t, err, "multipleFrom")
+
+	_, err = client.Send(ctxbg, webapi.SendRequest{Message: webapi.Message{Subject: "test", Text: "hi"}})
+	terrcode(t, err, "noRecipients")
+
+	_, err = client.Send(ctxbg, webapi.SendRequest{
+		Message: webapi.Message{
+			MessageID: "missingltgt@localhost",
+			To:        []webapi.NameAddress{{Address: "møx@mox.example"}},
+			Subject:   "test",
+			Text:      "hi",
+		},
+	})
+	terrcode(t, err, "malformedMessageID")
+ + // todo: messageLimitReached, recipientLimitReached + + // SuppressionList + supListRes, err := client.SuppressionList(ctxbg, webapi.SuppressionListRequest{}) + tcheckf(t, err, "listing suppressions") + tcompare(t, len(supListRes.Suppressions), 0) + + // SuppressionAdd + supAddReq := webapi.SuppressionAddRequest{EmailAddress: "Remote.Last-catchall@xn--74h.localhost", Manual: true, Reason: "tests"} + _, err = client.SuppressionAdd(ctxbg, supAddReq) + tcheckf(t, err, "add address to suppression list") + _, err = client.SuppressionAdd(ctxbg, supAddReq) + terrcode(t, err, "error") // Already present. + supAddReq2 := webapi.SuppressionAddRequest{EmailAddress: "remotelast@☺.localhost", Manual: false, Reason: "tests"} + _, err = client.SuppressionAdd(ctxbg, supAddReq2) + terrcode(t, err, "error") // Already present, same base address. + supAddReq3 := webapi.SuppressionAddRequest{EmailAddress: "not an address"} + _, err = client.SuppressionAdd(ctxbg, supAddReq3) + terrcode(t, err, "badAddress") + + supListRes, err = client.SuppressionList(ctxbg, webapi.SuppressionListRequest{}) + tcheckf(t, err, "listing suppressions") + tcompare(t, len(supListRes.Suppressions), 1) + supListRes.Suppressions[0].Created = now + tcompare(t, supListRes.Suppressions, []webapi.Suppression{ + { + ID: 1, + Created: now, + Account: "mjl", + BaseAddress: "remotelast@☺.localhost", + OriginalAddress: "Remote.Last-catchall@☺.localhost", + Manual: true, + Reason: "tests", + }, + }) + + // SuppressionPresent + supPresRes, err := client.SuppressionPresent(ctxbg, webapi.SuppressionPresentRequest{EmailAddress: "not@localhost"}) + tcheckf(t, err, "address present") + tcompare(t, supPresRes.Present, false) + supPresRes, err = client.SuppressionPresent(ctxbg, webapi.SuppressionPresentRequest{EmailAddress: "remotelast@xn--74h.localhost"}) + tcheckf(t, err, "address present") + tcompare(t, supPresRes.Present, true) + supPresRes, err = client.SuppressionPresent(ctxbg, 
webapi.SuppressionPresentRequest{EmailAddress: "Remote.Last-catchall@☺.localhost"})
+	tcheckf(t, err, "address present")
+	tcompare(t, supPresRes.Present, true)
+	supPresRes, err = client.SuppressionPresent(ctxbg, webapi.SuppressionPresentRequest{EmailAddress: "not an address"})
+	terrcode(t, err, "badAddress")
+
+	// SuppressionRemove
+	_, err = client.SuppressionRemove(ctxbg, webapi.SuppressionRemoveRequest{EmailAddress: "remote.LAST+more@☺.LocalHost"})
+	tcheckf(t, err, "remove suppressed address")
+	_, err = client.SuppressionRemove(ctxbg, webapi.SuppressionRemoveRequest{EmailAddress: "remote.LAST+more@☺.LocalHost"})
+	terrcode(t, err, "error") // Absent.
+	_, err = client.SuppressionRemove(ctxbg, webapi.SuppressionRemoveRequest{EmailAddress: "not an address"})
+	terrcode(t, err, "badAddress")
+
+	supListRes, err = client.SuppressionList(ctxbg, webapi.SuppressionListRequest{})
+	tcheckf(t, err, "listing suppressions")
+	tcompare(t, len(supListRes.Suppressions), 0)
+
+	// MessageGet, we retrieve the message we sent first.
+	msgRes, err := client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1})
+	tcheckf(t, err, "get message")
+	sentMsg := sendReq.Message
+	sentMsg.BCC = []webapi.NameAddress{} // todo: the Sent message should contain the BCC. for webmail too.
+	sentMsg.Date = msgRes.Message.Date
+	sentMsg.HTML += "\n"
+	tcompare(t, msgRes.Message, sentMsg)
+	// The structure is: mixed (related (alternative text html) inline-png) attached-pdf.
+	pdfpart := msgRes.Structure.Parts[1]
+	tcompare(t, pdfpart.ContentType, "application/pdf")
+	// structure compared below, parsed again from raw message.
+	// todo: compare Meta
+
+	_, err = client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1 + 999})
+	terrcode(t, err, "messageNotFound")
+
+	// MessageRawGet
+	r, err := client.MessageRawGet(ctxbg, webapi.MessageRawGetRequest{MsgID: 1})
+	tcheckf(t, err, "get raw message")
+	var b bytes.Buffer
+	_, err = io.Copy(&b, r)
+	r.Close()
+	tcheckf(t, err, "reading raw message")
+	part, err := message.EnsurePart(log.Logger, true, bytes.NewReader(b.Bytes()), int64(b.Len()))
+	tcheckf(t, err, "parsing raw message")
+	tcompare(t, webhook.PartStructure(&part), msgRes.Structure)
+
+	_, err = client.MessageRawGet(ctxbg, webapi.MessageRawGetRequest{MsgID: 1 + 999})
+	terrcode(t, err, "messageNotFound")
+
+	// MessagePartGet
+	// The structure is: mixed (related (alternative text html) inline-png) attached-pdf.
+	r, err = client.MessagePartGet(ctxbg, webapi.MessagePartGetRequest{MsgID: 1, PartPath: []int{0, 0, 1}})
+	tcheckf(t, err, "get message part")
+	tdata(t, r, sendReq.HTML+"\r\n") // Part returns the raw data with \r\n line endings.
+ r.Close() + + r, err = client.MessagePartGet(ctxbg, webapi.MessagePartGetRequest{MsgID: 1, PartPath: []int{}}) + tcheckf(t, err, "get message part") + r.Close() + + _, err = client.MessagePartGet(ctxbg, webapi.MessagePartGetRequest{MsgID: 1, PartPath: []int{2}}) + terrcode(t, err, "partNotFound") + + _, err = client.MessagePartGet(ctxbg, webapi.MessagePartGetRequest{MsgID: 1 + 999, PartPath: []int{}}) + terrcode(t, err, "messageNotFound") + + _, err = client.MessageFlagsAdd(ctxbg, webapi.MessageFlagsAddRequest{MsgID: 1, Flags: []string{`\answered`, "$Forwarded", "custom"}}) + tcheckf(t, err, "add flags") + + msgRes, err = client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1}) + tcheckf(t, err, "get message") + tcompare(t, slices.Contains(msgRes.Meta.Flags, `\answered`), true) + tcompare(t, slices.Contains(msgRes.Meta.Flags, "$forwarded"), true) + tcompare(t, slices.Contains(msgRes.Meta.Flags, "custom"), true) + + // Setting duplicate flags doesn't make a change. + _, err = client.MessageFlagsAdd(ctxbg, webapi.MessageFlagsAddRequest{MsgID: 1, Flags: []string{`\Answered`, "$forwarded", "custom"}}) + tcheckf(t, err, "add flags") + msgRes2, err := client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1}) + tcheckf(t, err, "get message") + tcompare(t, msgRes.Meta.Flags, msgRes2.Meta.Flags) + + // Non-existing message gives generic user error. 
+ _, err = client.MessageFlagsAdd(ctxbg, webapi.MessageFlagsAddRequest{MsgID: 1 + 999, Flags: []string{`\answered`, "$Forwarded", "custom"}}) + terrcode(t, err, "messageNotFound") + + // MessageFlagsRemove + _, err = client.MessageFlagsRemove(ctxbg, webapi.MessageFlagsRemoveRequest{MsgID: 1, Flags: []string{`\Answered`, "$forwarded", "custom"}}) + tcheckf(t, err, "remove") + msgRes, err = client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1}) + tcheckf(t, err, "get message") + tcompare(t, slices.Contains(msgRes.Meta.Flags, `\answered`), false) + tcompare(t, slices.Contains(msgRes.Meta.Flags, "$forwarded"), false) + tcompare(t, slices.Contains(msgRes.Meta.Flags, "custom"), false) + // Can try removing again, no change. + _, err = client.MessageFlagsRemove(ctxbg, webapi.MessageFlagsRemoveRequest{MsgID: 1, Flags: []string{`\Answered`, "$forwarded", "custom"}}) + tcheckf(t, err, "remove") + + _, err = client.MessageFlagsRemove(ctxbg, webapi.MessageFlagsRemoveRequest{MsgID: 1 + 999, Flags: []string{`\Answered`, "$forwarded", "custom"}}) + terrcode(t, err, "messageNotFound") + + // MessageMove + tcompare(t, msgRes.Meta.MailboxName, "Sent") + _, err = client.MessageMove(ctxbg, webapi.MessageMoveRequest{MsgID: 1, DestMailboxName: "Inbox"}) + tcheckf(t, err, "move to inbox") + msgRes, err = client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1}) + tcheckf(t, err, "get message") + tcompare(t, msgRes.Meta.MailboxName, "Inbox") + _, err = client.MessageMove(ctxbg, webapi.MessageMoveRequest{MsgID: 1, DestMailboxName: "Bogus"}) + terrcode(t, err, "user") + _, err = client.MessageMove(ctxbg, webapi.MessageMoveRequest{MsgID: 1 + 999, DestMailboxName: "Inbox"}) + terrcode(t, err, "messageNotFound") + + // MessageDelete + _, err = client.MessageDelete(ctxbg, webapi.MessageDeleteRequest{MsgID: 1}) + tcheckf(t, err, "delete message") + _, err = client.MessageDelete(ctxbg, webapi.MessageDeleteRequest{MsgID: 1}) + terrcode(t, err, "user") // No longer. 
+ _, err = client.MessageGet(ctxbg, webapi.MessageGetRequest{MsgID: 1}) + terrcode(t, err, "messageNotFound") // No longer. + _, err = client.MessageDelete(ctxbg, webapi.MessageDeleteRequest{MsgID: 1 + 999}) + terrcode(t, err, "messageNotFound") +} + +func tdata(t *testing.T, r io.Reader, exp string) { + t.Helper() + buf, err := io.ReadAll(r) + tcheckf(t, err, "reading body") + tcompare(t, string(buf), exp) +} diff --git a/webauth/webauth.go b/webauth/webauth.go index 8f7a64a..751a4b9 100644 --- a/webauth/webauth.go +++ b/webauth/webauth.go @@ -129,7 +129,7 @@ func Check(ctx context.Context, log mlog.Log, sessionAuth SessionAuth, kind stri return "", "", "", false } - ip := remoteIP(log, isForwarded, r) + ip := RemoteIP(log, isForwarded, r) if ip == nil { respondAuthError("user:noAuth", "cannot find ip for rate limit check (missing x-forwarded-for header?)") return "", "", "", false @@ -181,7 +181,7 @@ func Check(ctx context.Context, log mlog.Log, sessionAuth SessionAuth, kind stri return accountName, sessionToken, loginAddress, true } -func remoteIP(log mlog.Log, isForwarded bool, r *http.Request) net.IP { +func RemoteIP(log mlog.Log, isForwarded bool, r *http.Request) net.IP { if isForwarded { s := r.Header.Get("X-Forwarded-For") ipstr := strings.TrimSpace(strings.Split(s, ",")[0]) @@ -230,7 +230,7 @@ func Login(ctx context.Context, log mlog.Log, sessionAuth SessionAuth, kind, coo return "", &sherpa.Error{Code: "user:error", Message: "missing login token"} } - ip := remoteIP(log, isForwarded, r) + ip := RemoteIP(log, isForwarded, r) if ip == nil { return "", fmt.Errorf("cannot find ip for rate limit check (missing x-forwarded-for header?)") } diff --git a/webhook/webhook.go b/webhook/webhook.go new file mode 100644 index 0000000..386cf23 --- /dev/null +++ b/webhook/webhook.go @@ -0,0 +1,163 @@ +// Package webhook has data types used for webhooks about incoming and outgoing deliveries. +// +// See package webapi for details about the webapi and webhooks. 
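The newly exported `RemoteIP` above takes the leftmost `X-Forwarded-For` entry as the client address when running behind a reverse proxy. A standalone sketch of just that parsing step:

```go
package main

import (
	"net"
	"strings"
)

// forwardedIP extracts the client IP the way RemoteIP does for forwarded
// requests: the first (leftmost) X-Forwarded-For entry is the original
// client as reported by the proxy; later entries are intermediate proxies.
func forwardedIP(header string) net.IP {
	first := strings.TrimSpace(strings.Split(header, ",")[0])
	return net.ParseIP(first) // nil when the header is missing or malformed.
}
```

A nil result is what triggers the "cannot find ip for rate limit check" error in the calling code.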
+// +// Types [Incoming] and [Outgoing] represent the JSON bodies sent in the webhooks. +// New fields may be added in the future, unrecognized fields should be ignored +// when parsing for forward compatibility. +package webhook + +import ( + "strings" + "time" + + "github.com/mjl-/mox/message" +) + +// OutgoingEvent is an activity for an outgoing delivery. Either generated by the +// queue, or through an incoming DSN (delivery status notification) message. +type OutgoingEvent string + +// note: outgoing hook events are in ../queue/hooks.go, ../mox-/config.go, ../queue.go and ../webapi/gendoc.sh. keep in sync. + +// todo: in future have more events: for spam complaints, perhaps mdn's. + +const ( + // Message was accepted by a next-hop server. This does not necessarily mean the + // message has been delivered in the mailbox of the user. + EventDelivered OutgoingEvent = "delivered" + + // Outbound delivery was suppressed because the recipient address is on the + // suppression list of the account, or a simplified/base variant of the address is. + EventSuppressed OutgoingEvent = "suppressed" + + // A delivery attempt failed but delivery will be retried again later. + EventDelayed OutgoingEvent = "delayed" + + // Delivery of the message failed and will not be tried again. Also see the + // "Suppressing" field of [Outgoing]. + EventFailed OutgoingEvent = "failed" + + // Message was relayed into a system that does not generate DSNs. Should only + // happen when explicitly requested. + EventRelayed OutgoingEvent = "relayed" + + // Message was accepted and is being delivered to multiple recipients (e.g. the + // address was an alias/list), which may generate more DSNs. + EventExpanded OutgoingEvent = "expanded" + + // Message was removed from the queue, e.g. canceled by admin/user. 
+	EventCanceled OutgoingEvent = "canceled"
+
+	// An incoming message was received that was either a DSN with an unknown event
+	// type ("action"), or an incoming non-DSN-message was received for the unique
+	// per-outgoing-message address used for sending.
+	EventUnrecognized OutgoingEvent = "unrecognized"
+)
+
+// Outgoing is the payload sent to webhook URLs for events about outgoing deliveries.
+type Outgoing struct {
+	Version          int               // Format of hook, currently 0.
+	Event            OutgoingEvent     // Type of outgoing delivery event.
+	DSN              bool              // If this event was triggered by a delivery status notification message (DSN).
+	Suppressing      bool              // If true, this failure caused the address to be added to the suppression list.
+	QueueMsgID       int64             // ID of message in queue.
+	FromID           string            // As used in MAIL FROM, can be empty, for incoming messages.
+	MessageID        string            // From Message-Id header, as set by submitter or us, with enclosing <>.
+	Subject          string            // Of original message.
+	WebhookQueued    time.Time         // When webhook was first queued for delivery.
+	SMTPCode         int               // Optional, for errors only, e.g. 451, 550. See package smtp for definitions.
+	SMTPEnhancedCode string            // Optional, for errors only, e.g. 5.1.1.
+	Error            string            // Error message while delivering, or from DSN from remote, if any.
+	Extra            map[string]string // Extra fields set for message during submit, through webapi call or through X-Mox-Extra-* headers during SMTP submission.
+}
+
+// Incoming is the data sent to a webhook for incoming deliveries over SMTP.
+type Incoming struct {
+	Version int // Format of hook, currently 0.
+
+	// Message "From" header, typically has one address.
+	From []NameAddress
+
+	To  []NameAddress
+	CC  []NameAddress
+	BCC []NameAddress // Often empty, even if you were a BCC recipient.
+
+	// Optional Reply-To header, typically absent or with one address.
+	ReplyTo []NameAddress
+
+	Subject string
+
+	// Of Message-Id header, typically of the form "<...>", includes <>.
+ MessageID string + + // Optional, the message-id this message is a reply to. Includes <>. + InReplyTo string + + // Optional, zero or more message-ids this message is a reply/forward/related to. + // The last entry is the most recent/immediate message this is a reply to. Earlier + // entries are the parents in a thread. Values include <>. + References []string + + // Time in "Date" message header, can be different from time received. + Date *time.Time + + // Contents of text/plain and/or text/html part (if any), with "\n" line-endings, + // converted from "\r\n". Values are truncated to 1MB (1024*1024 bytes). Use webapi + // MessagePartGet to retrieve the full part data. + Text string + HTML string + // No files, can be large. + + Structure Structure // Parsed form of MIME message. + Meta IncomingMeta // Details about message in storage, and SMTP transaction details. +} + +type IncomingMeta struct { + MsgID int64 // ID of message in storage, and to use in webapi calls like MessageGet. + MailFrom string // Address used during SMTP "MAIL FROM" command. + MailFromValidated bool // Whether SMTP MAIL FROM address was SPF-validated. + MsgFromValidated bool // Whether address in message "From"-header was DMARC(-like) validated. + RcptTo string // SMTP RCPT TO address used in SMTP. + DKIMVerifiedDomains []string // Verified domains from DKIM-signature in message. Can be different domain than used in addresses. + RemoteIP string // Where the message was delivered from. + Received time.Time // When message was received, may be different from the Date header. + MailboxName string // Mailbox where message was delivered to, based on configured rules. Defaults to "Inbox". + + // Whether this message was automated and should not receive automated replies. + // E.g. out of office or mailing list messages. + Automated bool +} + +type NameAddress struct { + Name string // Optional, human-readable "display name" of the addressee. + Address string // Required, email address. 
+} + +type Structure struct { + ContentType string // Lower case, e.g. text/plain. + ContentTypeParams map[string]string // Lower case keys, original case values, e.g. {"charset": "UTF-8"}. + ContentID string // Can be empty. Otherwise, should be a value wrapped in <>'s. For use in HTML, referenced as URI `cid:...`. + DecodedSize int64 // Size of content after decoding content-transfer-encoding. For text and HTML parts, this can be larger than the data returned since this size includes \r\n line endings. + Parts []Structure // Subparts of a multipart message, possibly recursive. +} + +// PartStructure returns a Structure for a parsed message part. +func PartStructure(p *message.Part) Structure { + parts := make([]Structure, len(p.Parts)) + for i := range p.Parts { + parts[i] = PartStructure(&p.Parts[i]) + } + s := Structure{ + ContentType: strings.ToLower(p.MediaType + "/" + p.MediaSubType), + ContentTypeParams: p.ContentTypeParams, + ContentID: p.ContentID, + DecodedSize: p.DecodedSize, + Parts: parts, + } + // Replace nil map with empty map, for easier to use JSON. 
+ if s.ContentTypeParams == nil { + s.ContentTypeParams = map[string]string{} + } + return s +} diff --git a/webmail/api.go b/webmail/api.go index 3f07fb1..01a944d 100644 --- a/webmail/api.go +++ b/webmail/api.go @@ -17,7 +17,7 @@ import ( "net/textproto" "os" "runtime/debug" - "sort" + "slices" "strings" "sync" "time" @@ -45,6 +45,7 @@ import ( "github.com/mjl-/mox/smtpclient" "github.com/mjl-/mox/store" "github.com/mjl-/mox/webauth" + "github.com/mjl-/mox/webops" ) //go:embed api.json @@ -266,6 +267,20 @@ func xmessageID(ctx context.Context, tx *bstore.Tx, messageID int64) store.Messa return m } +func xrandomID(ctx context.Context, n int) string { + return base64.RawURLEncoding.EncodeToString(xrandom(ctx, n)) +} + +func xrandom(ctx context.Context, n int) []byte { + buf := make([]byte, n) + x, err := cryptorand.Read(buf) + xcheckf(ctx, err, "read random") + if x != n { + xcheckf(ctx, errors.New("short random read"), "read random") + } + return buf +} + // MessageSubmit sends a message by submitting it the outgoing email queue. The // message is sent to all addresses listed in the To, Cc and Bcc addresses, without // Bcc message header. @@ -273,9 +288,9 @@ func xmessageID(ctx context.Context, tx *bstore.Tx, messageID int64) store.Messa // If a Sent mailbox is configured, messages are added to it after submitting // to the delivery queue. func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { - // Similar between ../smtpserver/server.go:/submit\( and ../webmail/webmail.go:/MessageSubmit\( + // Similar between ../smtpserver/server.go:/submit\( and ../webmail/api.go:/MessageSubmit\( and ../webapisrv/server.go:/Send\( - // todo: consider making this an HTTP POST, so we can upload as regular form, which is probably more efficient for encoding for the client and we can stream the data in. 
+ // todo: consider making this an HTTP POST, so we can upload as regular form, which is probably more efficient for encoding for the client and we can stream the data in. also not unlike the webapi Submit method. // Prevent any accidental control characters, or attempts at getting bare \r or \n // into messages. @@ -358,10 +373,10 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { msglimit, rcptlimit, err := acc.SendLimitReached(tx, rcpts) if msglimit >= 0 { metricSubmission.WithLabelValues("messagelimiterror").Inc() - xcheckuserf(ctx, errors.New("send message limit reached"), "checking outgoing rate limit") + xcheckuserf(ctx, errors.New("message limit reached"), "checking outgoing rate") } else if rcptlimit >= 0 { metricSubmission.WithLabelValues("recipientlimiterror").Inc() - xcheckuserf(ctx, errors.New("send message limit reached"), "checking outgoing rate limit") + xcheckuserf(ctx, errors.New("recipient limit reached"), "checking outgoing rate") } xcheckf(ctx, err, "checking send limit") }) @@ -455,15 +470,19 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { if rp.Envelope == nil { return } - xc.Header("In-Reply-To", rp.Envelope.MessageID) - ref := h.Get("References") - if ref == "" { - ref = h.Get("In-Reply-To") + + if rp.Envelope.MessageID != "" { + xc.Header("In-Reply-To", rp.Envelope.MessageID) } - if ref != "" { - xc.Header("References", ref+"\r\n\t"+rp.Envelope.MessageID) - } else { - xc.Header("References", rp.Envelope.MessageID) + refs := h.Values("References") + if len(refs) == 0 && rp.Envelope.InReplyTo != "" { + refs = []string{rp.Envelope.InReplyTo} + } + if rp.Envelope.MessageID != "" { + refs = append(refs, rp.Envelope.MessageID) + } + if len(refs) > 0 { + xc.Header("References", strings.Join(refs, "\r\n\t")) } }) } @@ -480,7 +499,7 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { xc.Header("Content-Type", fmt.Sprintf(`multipart/mixed; boundary="%s"`, mp.Boundary())) 
xc.Line() - textBody, ct, cte := xc.TextPart(m.TextBody) + textBody, ct, cte := xc.TextPart("plain", m.TextBody) textHdr := textproto.MIMEHeader{} textHdr.Set("Content-Type", ct) textHdr.Set("Content-Transfer-Encoding", cte) @@ -601,7 +620,7 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { err = mp.Close() xcheckf(ctx, err, "writing mime multipart") } else { - textBody, ct, cte := xc.TextPart(m.TextBody) + textBody, ct, cte := xc.TextPart("plain", m.TextBody) xc.Header("Content-Type", ct) xc.Header("Content-Transfer-Encoding", cte) xc.Line() @@ -625,13 +644,25 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { msgPrefix = dkimHeaders } - fromPath := smtp.Path{ - Localpart: fromAddr.Address.Localpart, - IPDomain: dns.IPDomain{Domain: fromAddr.Address.Domain}, + accConf, _ := acc.Conf() + loginAddr, err := smtp.ParseAddress(reqInfo.LoginAddress) + xcheckf(ctx, err, "parsing login address") + useFromID := slices.Contains(accConf.ParsedFromIDLoginAddresses, loginAddr) + fromPath := fromAddr.Address.Path() + var localpartBase string + if useFromID { + localpartBase = strings.SplitN(string(fromPath.Localpart), confDom.LocalpartCatchallSeparator, 2)[0] } qml := make([]queue.Msg, len(recipients)) now := time.Now() for i, rcpt := range recipients { + fp := fromPath + var fromID string + if useFromID { + fromID = xrandomID(ctx, 16) + fp.Localpart = smtp.Localpart(localpartBase + confDom.LocalpartCatchallSeparator + fromID) + } + // Don't use per-recipient unique message prefix when multiple recipients are // present, or the queue cannot deliver it in a single smtp transaction. 
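The from-id logic in the hunk above strips anything after the configured catchall separator from the localpart, then appends the separator and a fresh random id per recipient. A standalone sketch of that construction (assuming "+" as the separator, which is actually configurable per domain):

```go
package main

import "strings"

// fromIDLocalpart sketches the unique MAIL FROM localpart construction:
// take the base localpart (everything before the catchall separator),
// then append the separator and the random per-message from-id.
func fromIDLocalpart(localpart, sep, fromID string) string {
	base := strings.SplitN(localpart, sep, 2)[0]
	return base + sep + fromID
}
```

This lets DSNs and replies for a specific outgoing message be matched back to it, which is what the webhook delivery events build on.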
var recvRcpt string @@ -644,7 +675,7 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { Localpart: rcpt.Localpart, IPDomain: dns.IPDomain{Domain: rcpt.Domain}, } - qm := queue.MakeMsg(fromPath, toPath, xc.Has8bit, xc.SMTPUTF8, msgSize, messageID, []byte(rcptMsgPrefix), m.RequireTLS, now) + qm := queue.MakeMsg(fp, toPath, xc.Has8bit, xc.SMTPUTF8, msgSize, messageID, []byte(rcptMsgPrefix), m.RequireTLS, now, m.Subject) if m.FutureRelease != nil { ival := time.Until(*m.FutureRelease) if ival < 0 { @@ -656,6 +687,8 @@ func (w Webmail) MessageSubmit(ctx context.Context, m SubmitMessage) { qm.FutureReleaseRequest = "until;" + m.FutureRelease.Format(time.RFC3339) // todo: possibly add a header to the message stored in the Sent mailbox to indicate it was scheduled for later delivery. } + qm.FromID = fromID + // no qm.Extra from webmail qml[i] = qm } err = queue.Add(ctx, log, reqInfo.AccountName, dataFile, qml...) @@ -764,124 +797,13 @@ func (Webmail) MessageMove(ctx context.Context, messageIDs []int64, mailboxID in log.Check(err, "closing account") }() - acc.WithRLock(func() { - retrain := make([]store.Message, 0, len(messageIDs)) - removeChanges := map[int64]store.ChangeRemoveUIDs{} - // n adds, 1 remove, 2 mailboxcounts, optimistic and at least for a single message. - changes := make([]store.Change, 0, len(messageIDs)+3) + xops.MessageMove(ctx, log, acc, messageIDs, "", mailboxID) +} - xdbwrite(ctx, acc, func(tx *bstore.Tx) { - var mbSrc store.Mailbox - var modseq store.ModSeq - - mbDst := xmailboxID(ctx, tx, mailboxID) - - if len(messageIDs) == 0 { - return - } - - keywords := map[string]struct{}{} - - for _, mid := range messageIDs { - m := xmessageID(ctx, tx, mid) - - // We may have loaded this mailbox in the previous iteration of this loop. 
- if m.MailboxID != mbSrc.ID { - if mbSrc.ID != 0 { - err = tx.Update(&mbSrc) - xcheckf(ctx, err, "updating source mailbox counts") - changes = append(changes, mbSrc.ChangeCounts()) - } - mbSrc = xmailboxID(ctx, tx, m.MailboxID) - } - - if mbSrc.ID == mailboxID { - // Client should filter out messages that are already in mailbox. - xcheckuserf(ctx, errors.New("already in destination mailbox"), "moving message") - } - - if modseq == 0 { - modseq, err = acc.NextModSeq(tx) - xcheckf(ctx, err, "assigning next modseq") - } - - ch := removeChanges[m.MailboxID] - ch.UIDs = append(ch.UIDs, m.UID) - ch.ModSeq = modseq - ch.MailboxID = m.MailboxID - removeChanges[m.MailboxID] = ch - - // Copy of message record that we'll insert when UID is freed up. - om := m - om.PrepareExpunge() - om.ID = 0 // Assign new ID. - om.ModSeq = modseq - - mbSrc.Sub(m.MailboxCounts()) - - if mbDst.Trash { - m.Seen = true - } - conf, _ := acc.Conf() - m.MailboxID = mbDst.ID - if m.IsReject && m.MailboxDestinedID != 0 { - // Incorrectly delivered to Rejects mailbox. Adjust MailboxOrigID so this message - // is used for reputation calculation during future deliveries. - m.MailboxOrigID = m.MailboxDestinedID - m.IsReject = false - m.Seen = false - } - m.UID = mbDst.UIDNext - m.ModSeq = modseq - mbDst.UIDNext++ - m.JunkFlagsForMailbox(mbDst, conf) - err = tx.Update(&m) - xcheckf(ctx, err, "updating moved message in database") - - // Now that UID is unused, we can insert the old record again. - err = tx.Insert(&om) - xcheckf(ctx, err, "inserting record for expunge after moving message") - - mbDst.Add(m.MailboxCounts()) - - changes = append(changes, m.ChangeAddUID()) - retrain = append(retrain, m) - - for _, kw := range m.Keywords { - keywords[kw] = struct{}{} - } - } - - err = tx.Update(&mbSrc) - xcheckf(ctx, err, "updating source mailbox counts") - - changes = append(changes, mbSrc.ChangeCounts(), mbDst.ChangeCounts()) - - // Ensure destination mailbox has keywords of the moved messages. 
- var mbKwChanged bool - mbDst.Keywords, mbKwChanged = store.MergeKeywords(mbDst.Keywords, maps.Keys(keywords)) - if mbKwChanged { - changes = append(changes, mbDst.ChangeKeywords()) - } - - err = tx.Update(&mbDst) - xcheckf(ctx, err, "updating mailbox with uidnext") - - err = acc.RetrainMessages(ctx, log, tx, retrain, false) - xcheckf(ctx, err, "retraining messages after move") - }) - - // Ensure UIDs of the removed message are in increasing order. It is quite common - // for all messages to be from a single source mailbox, meaning this is just one - // change, for which we preallocated space. - for _, ch := range removeChanges { - sort.Slice(ch.UIDs, func(i, j int) bool { - return ch.UIDs[i] < ch.UIDs[j] - }) - changes = append(changes, ch) - } - store.BroadcastChanges(acc, changes) - }) +var xops = webops.XOps{ + DBWrite: xdbwrite, + Checkf: xcheckf, + Checkuserf: xcheckuserf, } // MessageDelete permanently deletes messages, without moving them to the Trash mailbox. @@ -899,86 +821,7 @@ func (Webmail) MessageDelete(ctx context.Context, messageIDs []int64) { return } - acc.WithWLock(func() { - removeChanges := map[int64]store.ChangeRemoveUIDs{} - changes := make([]store.Change, 0, len(messageIDs)+1) // n remove, 1 mailbox counts - - xdbwrite(ctx, acc, func(tx *bstore.Tx) { - var modseq store.ModSeq - var mb store.Mailbox - remove := make([]store.Message, 0, len(messageIDs)) - - var totalSize int64 - for _, mid := range messageIDs { - m := xmessageID(ctx, tx, mid) - totalSize += m.Size - - if m.MailboxID != mb.ID { - if mb.ID != 0 { - err := tx.Update(&mb) - xcheckf(ctx, err, "updating mailbox counts") - changes = append(changes, mb.ChangeCounts()) - } - mb = xmailboxID(ctx, tx, m.MailboxID) - } - - qmr := bstore.QueryTx[store.Recipient](tx) - qmr.FilterEqual("MessageID", m.ID) - _, err = qmr.Delete() - xcheckf(ctx, err, "removing message recipients") - - mb.Sub(m.MailboxCounts()) - - if modseq == 0 { - modseq, err = acc.NextModSeq(tx) - xcheckf(ctx, err, 
"assigning next modseq") - } - m.Expunged = true - m.ModSeq = modseq - err = tx.Update(&m) - xcheckf(ctx, err, "marking message as expunged") - - ch := removeChanges[m.MailboxID] - ch.UIDs = append(ch.UIDs, m.UID) - ch.MailboxID = m.MailboxID - ch.ModSeq = modseq - removeChanges[m.MailboxID] = ch - remove = append(remove, m) - } - - if mb.ID != 0 { - err := tx.Update(&mb) - xcheckf(ctx, err, "updating count in mailbox") - changes = append(changes, mb.ChangeCounts()) - } - - err = acc.AddMessageSize(log, tx, -totalSize) - xcheckf(ctx, err, "updating disk usage") - - // Mark removed messages as not needing training, then retrain them, so if they - // were trained, they get untrained. - for i := range remove { - remove[i].Junk = false - remove[i].Notjunk = false - } - err = acc.RetrainMessages(ctx, log, tx, remove, true) - xcheckf(ctx, err, "untraining deleted messages") - }) - - for _, ch := range removeChanges { - sort.Slice(ch.UIDs, func(i, j int) bool { - return ch.UIDs[i] < ch.UIDs[j] - }) - changes = append(changes, ch) - } - store.BroadcastChanges(acc, changes) - }) - - for _, mID := range messageIDs { - p := acc.MessagePath(mID) - err := os.Remove(p) - log.Check(err, "removing message file for expunge") - } + xops.MessageDelete(ctx, log, acc, messageIDs) } // FlagsAdd adds flags, either system flags like \Seen or custom keywords. 
The @@ -993,76 +836,7 @@ func (Webmail) FlagsAdd(ctx context.Context, messageIDs []int64, flaglist []stri log.Check(err, "closing account") }() - flags, keywords, err := store.ParseFlagsKeywords(flaglist) - xcheckuserf(ctx, err, "parsing flags") - - acc.WithRLock(func() { - var changes []store.Change - - xdbwrite(ctx, acc, func(tx *bstore.Tx) { - var modseq store.ModSeq - var retrain []store.Message - var mb, origmb store.Mailbox - - for _, mid := range messageIDs { - m := xmessageID(ctx, tx, mid) - - if mb.ID != m.MailboxID { - if mb.ID != 0 { - err := tx.Update(&mb) - xcheckf(ctx, err, "updating mailbox") - if mb.MailboxCounts != origmb.MailboxCounts { - changes = append(changes, mb.ChangeCounts()) - } - if mb.KeywordsChanged(origmb) { - changes = append(changes, mb.ChangeKeywords()) - } - } - mb = xmailboxID(ctx, tx, m.MailboxID) - origmb = mb - } - mb.Keywords, _ = store.MergeKeywords(mb.Keywords, keywords) - - mb.Sub(m.MailboxCounts()) - oflags := m.Flags - m.Flags = m.Flags.Set(flags, flags) - var kwChanged bool - m.Keywords, kwChanged = store.MergeKeywords(m.Keywords, keywords) - mb.Add(m.MailboxCounts()) - - if m.Flags == oflags && !kwChanged { - continue - } - - if modseq == 0 { - modseq, err = acc.NextModSeq(tx) - xcheckf(ctx, err, "assigning next modseq") - } - m.ModSeq = modseq - err = tx.Update(&m) - xcheckf(ctx, err, "updating message") - - changes = append(changes, m.ChangeFlags(oflags)) - retrain = append(retrain, m) - } - - if mb.ID != 0 { - err := tx.Update(&mb) - xcheckf(ctx, err, "updating mailbox") - if mb.MailboxCounts != origmb.MailboxCounts { - changes = append(changes, mb.ChangeCounts()) - } - if mb.KeywordsChanged(origmb) { - changes = append(changes, mb.ChangeKeywords()) - } - } - - err = acc.RetrainMessages(ctx, log, tx, retrain, false) - xcheckf(ctx, err, "retraining messages") - }) - - store.BroadcastChanges(acc, changes) - }) + xops.MessageFlagsAdd(ctx, log, acc, messageIDs, flaglist) } // FlagsClear clears flags, either system flags 
like \Seen or custom keywords. @@ -1076,71 +850,7 @@ func (Webmail) FlagsClear(ctx context.Context, messageIDs []int64, flaglist []st log.Check(err, "closing account") }() - flags, keywords, err := store.ParseFlagsKeywords(flaglist) - xcheckuserf(ctx, err, "parsing flags") - - acc.WithRLock(func() { - var retrain []store.Message - var changes []store.Change - - xdbwrite(ctx, acc, func(tx *bstore.Tx) { - var modseq store.ModSeq - var mb, origmb store.Mailbox - - for _, mid := range messageIDs { - m := xmessageID(ctx, tx, mid) - - if mb.ID != m.MailboxID { - if mb.ID != 0 { - err := tx.Update(&mb) - xcheckf(ctx, err, "updating counts for mailbox") - if mb.MailboxCounts != origmb.MailboxCounts { - changes = append(changes, mb.ChangeCounts()) - } - // note: cannot remove keywords from mailbox by removing keywords from message. - } - mb = xmailboxID(ctx, tx, m.MailboxID) - origmb = mb - } - - oflags := m.Flags - mb.Sub(m.MailboxCounts()) - m.Flags = m.Flags.Set(flags, store.Flags{}) - var changed bool - m.Keywords, changed = store.RemoveKeywords(m.Keywords, keywords) - mb.Add(m.MailboxCounts()) - - if m.Flags == oflags && !changed { - continue - } - - if modseq == 0 { - modseq, err = acc.NextModSeq(tx) - xcheckf(ctx, err, "assigning next modseq") - } - m.ModSeq = modseq - err = tx.Update(&m) - xcheckf(ctx, err, "updating message") - - changes = append(changes, m.ChangeFlags(oflags)) - retrain = append(retrain, m) - } - - if mb.ID != 0 { - err := tx.Update(&mb) - xcheckf(ctx, err, "updating keywords in mailbox") - if mb.MailboxCounts != origmb.MailboxCounts { - changes = append(changes, mb.ChangeCounts()) - } - // note: cannot remove keywords from mailbox by removing keywords from message. - } - - err = acc.RetrainMessages(ctx, log, tx, retrain, false) - xcheckf(ctx, err, "retraining messages") - }) - - store.BroadcastChanges(acc, changes) - }) + xops.MessageFlagsClear(ctx, log, acc, messageIDs, flaglist) } // MailboxCreate creates a new mailbox. 
diff --git a/webmail/api.json b/webmail/api.json index 45f4d64..d6958bc 100644 --- a/webmail/api.json +++ b/webmail/api.json @@ -982,14 +982,14 @@ }, { "Name": "InReplyTo", - "Docs": "", + "Docs": "From In-Reply-To header, includes \u003c\u003e.", "Typewords": [ "string" ] }, { "Name": "MessageID", - "Docs": "", + "Docs": "From Message-Id header, includes \u003c\u003e.", "Typewords": [ "string" ] @@ -1918,7 +1918,7 @@ }, { "Name": "DSN", - "Docs": "If this message is a DSN. For DSNs, we don't look at the subject when matching threads.", + "Docs": "If this message is a DSN, generated by us or received. For DSNs, we don't look at the subject when matching threads.", "Typewords": [ "bool" ] diff --git a/webmail/api.ts b/webmail/api.ts index 8dadc76..2b8c9a3 100644 --- a/webmail/api.ts +++ b/webmail/api.ts @@ -99,8 +99,8 @@ export interface Envelope { To?: Address[] | null CC?: Address[] | null BCC?: Address[] | null - InReplyTo: string - MessageID: string + InReplyTo: string // From In-Reply-To header, includes <>. + MessageID: string // From Message-Id header, includes <>. } // Address as used in From and To headers. @@ -302,7 +302,7 @@ export interface Message { ThreadMuted: boolean // If set, newly delivered child messages are automatically marked as read. This field is copied to new child messages. Changes are propagated to the webmail client. ThreadCollapsed: boolean // If set, this (sub)thread is collapsed in the webmail client, for threading mode "on" (mode "unread" ignores it). This field is copied to new child message. Changes are propagated to the webmail client. IsMailingList: boolean // If received message was known to match a mailing list rule (with modified junk filtering). - DSN: boolean // If this message is a DSN. For DSNs, we don't look at the subject when matching threads. + DSN: boolean // If this message is a DSN, generated by us or received. For DSNs, we don't look at the subject when matching threads. 
ReceivedTLSVersion: number // 0 if unknown, 1 if plaintext/no TLS, otherwise TLS cipher suite. ReceivedTLSCipherSuite: number ReceivedRequireTLS: boolean // Whether RequireTLS was known to be used for incoming delivery. diff --git a/webmail/api_test.go b/webmail/api_test.go index 51f114f..7d3f902 100644 --- a/webmail/api_test.go +++ b/webmail/api_test.go @@ -274,6 +274,8 @@ func TestAPI(t *testing.T) { // MessageSubmit queue.Localserve = true // Deliver directly to us instead attempting actual delivery. + err = queue.Init() + tcheck(t, err, "queue init") api.MessageSubmit(ctx, SubmitMessage{ From: "mjl@mox.example", To: []string{"mjl+to@mox.example", "mjl to2 "}, diff --git a/webmail/msg.js b/webmail/msg.js index ea5962c..94c14d1 100644 --- a/webmail/msg.js +++ b/webmail/msg.js @@ -220,6 +220,7 @@ const [dom, style, attr, prop] = (function () { autocomplete: (s) => _attr('autocomplete', s), list: (s) => _attr('list', s), form: (s) => _attr('form', s), + size: (s) => _attr('size', s), }; const style = (x) => { return { _styles: x }; }; const prop = (x) => { return { _props: x }; }; diff --git a/webmail/text.js b/webmail/text.js index 59b275b..ab98a22 100644 --- a/webmail/text.js +++ b/webmail/text.js @@ -220,6 +220,7 @@ const [dom, style, attr, prop] = (function () { autocomplete: (s) => _attr('autocomplete', s), list: (s) => _attr('list', s), form: (s) => _attr('form', s), + size: (s) => _attr('size', s), }; const style = (x) => { return { _styles: x }; }; const prop = (x) => { return { _props: x }; }; diff --git a/webmail/webmail.go b/webmail/webmail.go index bc70c7b..6465995 100644 --- a/webmail/webmail.go +++ b/webmail/webmail.go @@ -77,7 +77,7 @@ var webmailtextHTML []byte var webmailtextJS []byte var ( - // Similar between ../webmail/webmail.go:/metricSubmission and ../smtpserver/server.go:/metricSubmission + // Similar between ../webmail/webmail.go:/metricSubmission and ../smtpserver/server.go:/metricSubmission and ../webapisrv/server.go:/metricSubmission 
metricSubmission = promauto.NewCounterVec( prometheus.CounterOpts{ Name: "mox_webmail_submission_total", diff --git a/webmail/webmail.js b/webmail/webmail.js index f8b528b..f34889e 100644 --- a/webmail/webmail.js +++ b/webmail/webmail.js @@ -220,6 +220,7 @@ const [dom, style, attr, prop] = (function () { autocomplete: (s) => _attr('autocomplete', s), list: (s) => _attr('list', s), form: (s) => _attr('form', s), + size: (s) => _attr('size', s), }; const style = (x) => { return { _styles: x }; }; const prop = (x) => { return { _props: x }; }; diff --git a/webops/xops.go b/webops/xops.go new file mode 100644 index 0000000..b6dff5a --- /dev/null +++ b/webops/xops.go @@ -0,0 +1,480 @@ +// Package webops implements shared functionality between webapisrv and webmail. +package webops + +import ( + "context" + "errors" + "fmt" + "io" + "os" + "sort" + + "golang.org/x/exp/maps" + + "github.com/mjl-/bstore" + + "github.com/mjl-/mox/message" + "github.com/mjl-/mox/mlog" + "github.com/mjl-/mox/store" +) + +var ErrMessageNotFound = errors.New("no such message") + +type XOps struct { + DBWrite func(ctx context.Context, acc *store.Account, fn func(tx *bstore.Tx)) + Checkf func(ctx context.Context, err error, format string, args ...any) + Checkuserf func(ctx context.Context, err error, format string, args ...any) +} + +func (x XOps) mailboxID(ctx context.Context, tx *bstore.Tx, mailboxID int64) store.Mailbox { + if mailboxID == 0 { + x.Checkuserf(ctx, errors.New("invalid zero mailbox ID"), "getting mailbox") + } + mb := store.Mailbox{ID: mailboxID} + err := tx.Get(&mb) + if err == bstore.ErrAbsent { + x.Checkuserf(ctx, err, "getting mailbox") + } + x.Checkf(ctx, err, "getting mailbox") + return mb +} + +// messageID returns a non-expunged message or panics with a sherpa error. 
+func (x XOps) messageID(ctx context.Context, tx *bstore.Tx, messageID int64) store.Message { + if messageID == 0 { + x.Checkuserf(ctx, errors.New("invalid zero message id"), "getting message") + } + m := store.Message{ID: messageID} + err := tx.Get(&m) + if err == bstore.ErrAbsent { + x.Checkuserf(ctx, ErrMessageNotFound, "getting message") + } else if err == nil && m.Expunged { + x.Checkuserf(ctx, errors.New("message was removed"), "getting message") + } + x.Checkf(ctx, err, "getting message") + return m +} + +func (x XOps) MessageDelete(ctx context.Context, log mlog.Log, acc *store.Account, messageIDs []int64) { + acc.WithWLock(func() { + removeChanges := map[int64]store.ChangeRemoveUIDs{} + changes := make([]store.Change, 0, len(messageIDs)+1) // n remove, 1 mailbox counts + + x.DBWrite(ctx, acc, func(tx *bstore.Tx) { + var modseq store.ModSeq + var mb store.Mailbox + remove := make([]store.Message, 0, len(messageIDs)) + + var totalSize int64 + for _, mid := range messageIDs { + m := x.messageID(ctx, tx, mid) + totalSize += m.Size + + if m.MailboxID != mb.ID { + if mb.ID != 0 { + err := tx.Update(&mb) + x.Checkf(ctx, err, "updating mailbox counts") + changes = append(changes, mb.ChangeCounts()) + } + mb = x.mailboxID(ctx, tx, m.MailboxID) + } + + qmr := bstore.QueryTx[store.Recipient](tx) + qmr.FilterEqual("MessageID", m.ID) + _, err := qmr.Delete() + x.Checkf(ctx, err, "removing message recipients") + + mb.Sub(m.MailboxCounts()) + + if modseq == 0 { + modseq, err = acc.NextModSeq(tx) + x.Checkf(ctx, err, "assigning next modseq") + } + m.Expunged = true + m.ModSeq = modseq + err = tx.Update(&m) + x.Checkf(ctx, err, "marking message as expunged") + + ch := removeChanges[m.MailboxID] + ch.UIDs = append(ch.UIDs, m.UID) + ch.MailboxID = m.MailboxID + ch.ModSeq = modseq + removeChanges[m.MailboxID] = ch + remove = append(remove, m) + } + + if mb.ID != 0 { + err := tx.Update(&mb) + x.Checkf(ctx, err, "updating count in mailbox") + changes = append(changes, 
mb.ChangeCounts()) + } + + err := acc.AddMessageSize(log, tx, -totalSize) + x.Checkf(ctx, err, "updating disk usage") + + // Mark removed messages as not needing training, then retrain them, so if they + // were trained, they get untrained. + for i := range remove { + remove[i].Junk = false + remove[i].Notjunk = false + } + err = acc.RetrainMessages(ctx, log, tx, remove, true) + x.Checkf(ctx, err, "untraining deleted messages") + }) + + for _, ch := range removeChanges { + sort.Slice(ch.UIDs, func(i, j int) bool { + return ch.UIDs[i] < ch.UIDs[j] + }) + changes = append(changes, ch) + } + store.BroadcastChanges(acc, changes) + }) + + for _, mID := range messageIDs { + p := acc.MessagePath(mID) + err := os.Remove(p) + log.Check(err, "removing message file for expunge") + } +} + +func (x XOps) MessageFlagsAdd(ctx context.Context, log mlog.Log, acc *store.Account, messageIDs []int64, flaglist []string) { + flags, keywords, err := store.ParseFlagsKeywords(flaglist) + x.Checkuserf(ctx, err, "parsing flags") + + acc.WithRLock(func() { + var changes []store.Change + + x.DBWrite(ctx, acc, func(tx *bstore.Tx) { + var modseq store.ModSeq + var retrain []store.Message + var mb, origmb store.Mailbox + + for _, mid := range messageIDs { + m := x.messageID(ctx, tx, mid) + + if mb.ID != m.MailboxID { + if mb.ID != 0 { + err := tx.Update(&mb) + x.Checkf(ctx, err, "updating mailbox") + if mb.MailboxCounts != origmb.MailboxCounts { + changes = append(changes, mb.ChangeCounts()) + } + if mb.KeywordsChanged(origmb) { + changes = append(changes, mb.ChangeKeywords()) + } + } + mb = x.mailboxID(ctx, tx, m.MailboxID) + origmb = mb + } + mb.Keywords, _ = store.MergeKeywords(mb.Keywords, keywords) + + mb.Sub(m.MailboxCounts()) + oflags := m.Flags + m.Flags = m.Flags.Set(flags, flags) + var kwChanged bool + m.Keywords, kwChanged = store.MergeKeywords(m.Keywords, keywords) + mb.Add(m.MailboxCounts()) + + if m.Flags == oflags && !kwChanged { + continue + } + + if modseq == 0 { + modseq, err = 
acc.NextModSeq(tx) + x.Checkf(ctx, err, "assigning next modseq") + } + m.ModSeq = modseq + err = tx.Update(&m) + x.Checkf(ctx, err, "updating message") + + changes = append(changes, m.ChangeFlags(oflags)) + retrain = append(retrain, m) + } + + if mb.ID != 0 { + err := tx.Update(&mb) + x.Checkf(ctx, err, "updating mailbox") + if mb.MailboxCounts != origmb.MailboxCounts { + changes = append(changes, mb.ChangeCounts()) + } + if mb.KeywordsChanged(origmb) { + changes = append(changes, mb.ChangeKeywords()) + } + } + + err = acc.RetrainMessages(ctx, log, tx, retrain, false) + x.Checkf(ctx, err, "retraining messages") + }) + + store.BroadcastChanges(acc, changes) + }) +} + +func (x XOps) MessageFlagsClear(ctx context.Context, log mlog.Log, acc *store.Account, messageIDs []int64, flaglist []string) { + flags, keywords, err := store.ParseFlagsKeywords(flaglist) + x.Checkuserf(ctx, err, "parsing flags") + + acc.WithRLock(func() { + var retrain []store.Message + var changes []store.Change + + x.DBWrite(ctx, acc, func(tx *bstore.Tx) { + var modseq store.ModSeq + var mb, origmb store.Mailbox + + for _, mid := range messageIDs { + m := x.messageID(ctx, tx, mid) + + if mb.ID != m.MailboxID { + if mb.ID != 0 { + err := tx.Update(&mb) + x.Checkf(ctx, err, "updating counts for mailbox") + if mb.MailboxCounts != origmb.MailboxCounts { + changes = append(changes, mb.ChangeCounts()) + } + // note: cannot remove keywords from mailbox by removing keywords from message. 
+ } + mb = x.mailboxID(ctx, tx, m.MailboxID) + origmb = mb + } + + oflags := m.Flags + mb.Sub(m.MailboxCounts()) + m.Flags = m.Flags.Set(flags, store.Flags{}) + var changed bool + m.Keywords, changed = store.RemoveKeywords(m.Keywords, keywords) + mb.Add(m.MailboxCounts()) + + if m.Flags == oflags && !changed { + continue + } + + if modseq == 0 { + modseq, err = acc.NextModSeq(tx) + x.Checkf(ctx, err, "assigning next modseq") + } + m.ModSeq = modseq + err = tx.Update(&m) + x.Checkf(ctx, err, "updating message") + + changes = append(changes, m.ChangeFlags(oflags)) + retrain = append(retrain, m) + } + + if mb.ID != 0 { + err := tx.Update(&mb) + x.Checkf(ctx, err, "updating keywords in mailbox") + if mb.MailboxCounts != origmb.MailboxCounts { + changes = append(changes, mb.ChangeCounts()) + } + // note: cannot remove keywords from mailbox by removing keywords from message. + } + + err = acc.RetrainMessages(ctx, log, tx, retrain, false) + x.Checkf(ctx, err, "retraining messages") + }) + + store.BroadcastChanges(acc, changes) + }) +} + +// MessageMove moves messages to the mailbox represented by mailboxName, or to mailboxID if mailboxName is empty. +func (x XOps) MessageMove(ctx context.Context, log mlog.Log, acc *store.Account, messageIDs []int64, mailboxName string, mailboxID int64) { + acc.WithRLock(func() { + retrain := make([]store.Message, 0, len(messageIDs)) + removeChanges := map[int64]store.ChangeRemoveUIDs{} + // n adds, 1 remove, 2 mailboxcounts, optimistic and at least for a single message. 
+ changes := make([]store.Change, 0, len(messageIDs)+3) + + x.DBWrite(ctx, acc, func(tx *bstore.Tx) { + var mbSrc store.Mailbox + var modseq store.ModSeq + + if mailboxName != "" { + mb, err := acc.MailboxFind(tx, mailboxName) + x.Checkf(ctx, err, "looking up mailbox name") + if mb == nil { + x.Checkuserf(ctx, errors.New("not found"), "looking up mailbox name") + } else { + mailboxID = mb.ID + } + } + + mbDst := x.mailboxID(ctx, tx, mailboxID) + + if len(messageIDs) == 0 { + return + } + + keywords := map[string]struct{}{} + + for _, mid := range messageIDs { + m := x.messageID(ctx, tx, mid) + + // We may have loaded this mailbox in the previous iteration of this loop. + if m.MailboxID != mbSrc.ID { + if mbSrc.ID != 0 { + err := tx.Update(&mbSrc) + x.Checkf(ctx, err, "updating source mailbox counts") + changes = append(changes, mbSrc.ChangeCounts()) + } + mbSrc = x.mailboxID(ctx, tx, m.MailboxID) + } + + if mbSrc.ID == mailboxID { + // Client should filter out messages that are already in mailbox. + x.Checkuserf(ctx, errors.New("already in destination mailbox"), "moving message") + } + + var err error + if modseq == 0 { + modseq, err = acc.NextModSeq(tx) + x.Checkf(ctx, err, "assigning next modseq") + } + + ch := removeChanges[m.MailboxID] + ch.UIDs = append(ch.UIDs, m.UID) + ch.ModSeq = modseq + ch.MailboxID = m.MailboxID + removeChanges[m.MailboxID] = ch + + // Copy of message record that we'll insert when UID is freed up. + om := m + om.PrepareExpunge() + om.ID = 0 // Assign new ID. + om.ModSeq = modseq + + mbSrc.Sub(m.MailboxCounts()) + + if mbDst.Trash { + m.Seen = true + } + conf, _ := acc.Conf() + m.MailboxID = mbDst.ID + if m.IsReject && m.MailboxDestinedID != 0 { + // Incorrectly delivered to Rejects mailbox. Adjust MailboxOrigID so this message + // is used for reputation calculation during future deliveries. 
+ m.MailboxOrigID = m.MailboxDestinedID + m.IsReject = false + m.Seen = false + } + m.UID = mbDst.UIDNext + m.ModSeq = modseq + mbDst.UIDNext++ + m.JunkFlagsForMailbox(mbDst, conf) + err = tx.Update(&m) + x.Checkf(ctx, err, "updating moved message in database") + + // Now that UID is unused, we can insert the old record again. + err = tx.Insert(&om) + x.Checkf(ctx, err, "inserting record for expunge after moving message") + + mbDst.Add(m.MailboxCounts()) + + changes = append(changes, m.ChangeAddUID()) + retrain = append(retrain, m) + + for _, kw := range m.Keywords { + keywords[kw] = struct{}{} + } + } + + err := tx.Update(&mbSrc) + x.Checkf(ctx, err, "updating source mailbox counts") + + changes = append(changes, mbSrc.ChangeCounts(), mbDst.ChangeCounts()) + + // Ensure destination mailbox has keywords of the moved messages. + var mbKwChanged bool + mbDst.Keywords, mbKwChanged = store.MergeKeywords(mbDst.Keywords, maps.Keys(keywords)) + if mbKwChanged { + changes = append(changes, mbDst.ChangeKeywords()) + } + + err = tx.Update(&mbDst) + x.Checkf(ctx, err, "updating mailbox with uidnext") + + err = acc.RetrainMessages(ctx, log, tx, retrain, false) + x.Checkf(ctx, err, "retraining messages after move") + }) + + // Ensure UIDs of the removed message are in increasing order. It is quite common + // for all messages to be from a single source mailbox, meaning this is just one + // change, for which we preallocated space. 
+ for _, ch := range removeChanges { + sort.Slice(ch.UIDs, func(i, j int) bool { + return ch.UIDs[i] < ch.UIDs[j] + }) + changes = append(changes, ch) + } + store.BroadcastChanges(acc, changes) + }) +} + +func isText(p message.Part) bool { + return p.MediaType == "" && p.MediaSubType == "" || p.MediaType == "TEXT" && p.MediaSubType == "PLAIN" +} + +func isHTML(p message.Part) bool { + return p.MediaType == "" && p.MediaSubType == "" || p.MediaType == "TEXT" && p.MediaSubType == "HTML" +} + +func isAlternative(p message.Part) bool { + return p.MediaType == "MULTIPART" && p.MediaSubType == "ALTERNATIVE" +} + +func readPart(p message.Part, maxSize int64) (string, error) { + buf, err := io.ReadAll(io.LimitReader(p.ReaderUTF8OrBinary(), maxSize)) + if err != nil { + return "", fmt.Errorf("reading part contents: %v", err) + } + return string(buf), nil +} + +// ReadableParts returns the contents of the first text and/or html parts, +// descending into multiparts, truncated to maxSize bytes if longer. +func ReadableParts(p message.Part, maxSize int64) (text string, html string, found bool, err error) { + // todo: may want to merge this logic with webmail's message parsing. + + // For non-multipart messages, top-level part. + if isText(p) { + data, err := readPart(p, maxSize) + return data, "", true, err + } else if isHTML(p) { + data, err := readPart(p, maxSize) + return "", data, true, err + } + + // Look in sub-parts. Stop when we have a readable part, don't continue with other + // subparts unless we have a multipart/alternative. + // todo: we may have to look at disposition "inline". + var haveText, haveHTML bool + for _, pp := range p.Parts { + if isText(pp) { + haveText = true + text, err = readPart(pp, maxSize) + if !isAlternative(p) { + break + } + } else if isHTML(pp) { + haveHTML = true + html, err = readPart(pp, maxSize) + if !isAlternative(p) { + break + } + } + } + if haveText || haveHTML { + return text, html, true, err + } + + // Descend into the subparts. 
+ for _, pp := range p.Parts { + text, html, found, err = ReadableParts(pp, maxSize) + if found { + break + } + } + return +} diff --git a/website/features/index.md b/website/features/index.md index 53e01b9..b8bd569 100644 --- a/website/features/index.md +++ b/website/features/index.md @@ -307,6 +307,35 @@ and can enable REQUIRETLS by default. See [webmail screenshots](../screenshots/#hdr-webmail). +## Webapi and webhooks + +The webapi and webhooks make it easy to send/receive transactional email with +only HTTP/JSON, without requiring detailed knowledge of, or libraries for, +composing email messages (Internet Message Format, IMF), SMTP for submission, +and IMAP for handling incoming messages including delivery status notifications +(DSNs). + +Outgoing webhooks notify about events for outgoing deliveries (such as +"delivered", "delayed", "failed", "suppressed"). + +Incoming webhooks notify about incoming deliveries. + +The webapi can be used to submit messages to the queue, and to process incoming +messages, for example by moving them to another mailbox, setting/clearing flags +or deleting them. + +Per-account suppression lists, automatically managed based on SMTP status codes +and DSN messages, protect the reputation of your mail server. + +For API documentation and examples of the webapi and webhooks, see +https://pkg.go.dev/github.com/mjl-/mox/webapi/. Earlier mox versions can be +selected in the top left (at the time of writing). + +The mox webapi endpoint at /webapi/v0/ lists the available methods and links to +them; each method page shows an example request and response JSON object and +lets you call the method. + + ## Internationalized email Originally, email addresses were ASCII-only. An email address consists of a diff --git a/website/website.go b/website/website.go index 29b283d..14f562d 100644 --- a/website/website.go +++ b/website/website.go @@ -481,6 +481,7 @@ h2 { background: linear-gradient(90deg, #6dd5fd 0%, #77e8e3 100%); display: inli External links: