mox/webaccount/account.go

// Package webaccount provides a web app for users to view and change their account
// settings, and to import/export email.
package webaccount

import (
	"bytes"
	"context"
	cryptorand "crypto/rand"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log/slog"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"strings"
	"time"

	_ "embed"

	"github.com/mjl-/bstore"
	"github.com/mjl-/sherpa"
	"github.com/mjl-/sherpadoc"
	"github.com/mjl-/sherpaprom"

	"github.com/mjl-/mox/config"
	"github.com/mjl-/mox/mlog"
	"github.com/mjl-/mox/mox-"
	"github.com/mjl-/mox/moxvar"
	"github.com/mjl-/mox/queue"
	"github.com/mjl-/mox/smtp"
	"github.com/mjl-/mox/store"
	"github.com/mjl-/mox/webapi"
	"github.com/mjl-/mox/webauth"
	"github.com/mjl-/mox/webhook"
	"github.com/mjl-/mox/webops"
)
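
// pkglog is the package-level logger for webaccount, e.g. for fatal errors
// while parsing the embedded API documentation below.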
var pkglog = mlog.New("webaccount", nil)

//go:embed api.json
var accountapiJSON []byte

//go:embed account.html
var accountHTML []byte

//go:embed account.js
var accountJS []byte
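
// webaccountFile bundles the embedded HTML and JS of the account web app,
// together with the paths of their sources in the repository. The paths let
// mox.WebappFile serve the files from disk instead of the embedded copies,
// which is presumably how edits are picked up during development.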
var webaccountFile = &mox.WebappFile{
	HTML:     accountHTML,
	JS:       accountJS,
	HTMLPath: filepath.FromSlash("webaccount/account.html"),
	JSPath:   filepath.FromSlash("webaccount/account.js"),
}

var accountDoc = mustParseAPI("account", accountapiJSON)
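
// mustParseAPI parses the embedded sherpadoc API definition, aborting the
// program if it is invalid: a broken definition indicates a build problem,
// not a runtime condition worth recovering from.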
2023-01-30 16:27:06 +03:00
add webmail it was far down on the roadmap, but implemented earlier, because it's interesting, and to help prepare for a jmap implementation. for jmap we need to implement more client-like functionality than with just imap. internal data structures need to change. jmap has lots of other requirements, so it's already a big project. by implementing a webmail now, some of the required data structure changes become clear and can be made now, so the later jmap implementation can do things similarly to the webmail code. the webmail frontend and webmail are written together, making their interface/api much smaller and simpler than jmap. one of the internal changes is that we now keep track of per-mailbox total/unread/unseen/deleted message counts and mailbox sizes. keeping this data consistent after any change to the stored messages (through the code base) is tricky, so mox now has a consistency check that verifies the counts are correct, which runs only during tests, each time an internal account reference is closed. we have a few more internal "changes" that are propagated for the webmail frontend (that imap doesn't have a way to propagate on a connection), like changes to the special-use flags on mailboxes, and used keywords in a mailbox. more changes that will be required have revealed themselves while implementing the webmail, and will be implemented next. the webmail user interface is modeled after the mail clients i use or have used: thunderbird, macos mail, mutt; and webmails i normally only use for testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed, but still the goal is to make this webmail client easy to use for everyone. the user interface looks like most other mail clients: a list of mailboxes, a search bar, a message list view, and message details. there is a top/bottom and a left/right layout for the list/message view, default is automatic based on screen size. the panes can be resized by the user. buttons for actions are just text, not icons. clicking a button briefly shows the shortcut for the action in the bottom right, helping with learning to operate quickly. any text that is underdotted has a title attribute that causes more information to be displayed, e.g. what a button does or a field is about. to highlight potential phishing attempts, any text (anywhere in the webclient) that switches unicode "blocks" (a rough approximation to (language) scripts) within a word is underlined orange. multiple messages can be selected with familiar ui interaction: clicking while holding control and/or shift keys. keyboard navigation works with arrows/page up/down and home/end keys, and also with a few basic vi-like keys for list/message navigation. we prefer showing the text instead of html (with inlined images only) version of a message. html messages are shown in an iframe served from an endpoint with CSP headers to prevent dangerous resources (scripts, external images) from being loaded. the html is also sanitized, with javascript removed. a user can choose to load external resources (e.g. images for tracking purposes). the frontend is just (strict) typescript, no external frameworks. all incoming/outgoing data is typechecked, both the api request parameters and response types, and the data coming in over SSE. the types and checking code are generated with sherpats, which uses the api definitions generated by sherpadoc based on the Go code. so types from the backend are automatically propagated to the frontend. 
since there is no framework to automatically propagate properties and rerender components, changes coming in over the SSE connection are propagated explicitly with regular function calls. the ui is separated into "views", each with a "root" dom element that is added to the visible document. these views have additional functions for getting changes propagated, often resulting in the view updating its (internal) ui state (dom). we keep the frontend compilation simple, it's just a few typescript files that get compiled (combined and types stripped) into a single js file, no additional runtime code needed or complicated build processes used. the webmail is served is served from a compressed, cachable html file that includes style and the javascript, currently just over 225kb uncompressed, under 60kb compressed (not minified, including comments). we include the generated js files in the repository, to keep Go's easily buildable self-contained binaries. authentication is basic http, as with the account and admin pages. most data comes in over one long-term SSE connection to the backend. api requests signal which mailbox/search/messages are requested over the SSE connection. fetching individual messages, and making changes, are done through api calls. the operations are similar to imap, so some code has been moved from package imapserver to package store. the future jmap implementation will benefit from these changes too. more functionality will probably be moved to the store package in the future. the quickstart enables webmail on the internal listener by default (for new installs). users can enable it on the public listener if they want to. mox localserve enables it too. to enable webmail on existing installs, add settings like the following to the listeners in mox.conf, similar to AccountHTTP(S): WebmailHTTP: Enabled: true WebmailHTTPS: Enabled: true special thanks to liesbeth, gerben, andrii for early user feedback. there is plenty still to do, see the list at the top of webmail/webmail.ts. feedback welcome as always.
2023-08-07 22:57:03 +03:00
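// The long-term SSE connection described above uses standard server-sent
// events framing. A minimal sketch of writing one frame, assuming only
// standard net/http types (illustrative; the webmail package has its own,
// richer implementation):
func writeSSEEventSketch(w http.ResponseWriter, f http.Flusher, name string, data []byte) error {
	// An SSE frame is "event: <name>\ndata: <payload>\n\n"; flushing pushes it
	// to the client immediately instead of waiting for the buffer to fill.
	_, err := fmt.Fprintf(w, "event: %s\ndata: %s\n\n", name, data)
	f.Flush()
	return err
}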
func mustParseAPI(api string, buf []byte) (doc sherpadoc.Section) {
err := json.Unmarshal(buf, &doc)
if err != nil {
pkglog.Fatalx("parsing webaccount api docs", err, slog.String("api", api))
}
return doc
}
replace http basic auth for web interfaces with session cookie & csrf-based auth

the http basic auth we had was very simple to reason about, and to implement. but it has a major downside: there is no way to logout, browsers keep sending credentials. ideally, browsers themselves would show a button to stop sending credentials. a related downside: the http auth mechanism doesn't indicate for which server paths the credentials are. another downside: the original password is sent to the server with each request. though sending original passwords to web servers seems to be considered normal.

our new approach uses session cookies, along with csrf values when we can. the sessions are server-side managed, automatically extended on each use. this makes it easy to invalidate sessions and keeps the frontend simpler (than with long- vs short-term sessions and refreshing). the cookies are httponly, samesite=strict, scoped to the path of the web interface. cookies are set "secure" when set over https. the cookie is set by a successful call to Login. a call to Logout invalidates a session. changing a password invalidates all sessions for a user, but keeps the session with which the password was changed alive.

the csrf value is also random, and associated with the session cookie. the csrf must be sent as a header for api calls, or as a parameter for direct form posts (where we cannot set a custom header). rest-like calls made directly by the browser, e.g. for images, don't have csrf protection. the csrf value is returned by the Login api call and stored in localstorage.

api calls without credentials return code "user:noAuth", and with bad credentials return "user:badAuth". the api client recognizes this and triggers a login. after a login, all auth-failed api calls are automatically retried. only for "user:badAuth" is an error message displayed in the login form (e.g. session expired).

in an ideal world, browsers would take care of most session management. a server would indicate authentication is needed (like http basic auth), and the browser would use trusted ui to request credentials for the server & path. the browser could use safer mechanisms than sending original passwords to the server, such as scram, along with a standard way to create sessions. for now, web developers have to do authentication themselves: from showing the login prompt, to ensuring the right session/csrf cookies/localstorage/headers/etc are sent with each request.

webauthn is a newer way to do authentication, perhaps we'll implement it in the future. though hardware tokens aren't an attractive option for many users, and it may be overkill as long as we still do old-fashioned authentication in smtp & imap, where passwords can be sent to the server.

for issue #58
2024-01-04 15:10:48 +03:00
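// A generic illustration of the session cookie + csrf check described above.
// The cookie and header names and the in-memory session map are assumptions
// for this sketch; the real checks live in the webauth package (see
// webauth.Check in handle below), with server-side session management.
func checkSessionSketch(sessions map[string]string, w http.ResponseWriter, r *http.Request) bool {
	// The session cookie is httponly, samesite=strict and scoped to the path
	// of the web interface, so only same-site requests carry it.
	c, err := r.Cookie("webaccountsession")
	if err != nil {
		http.Error(w, "no session", http.StatusUnauthorized)
		return false
	}
	// The csrf value is bound to the session and must be sent as a header for
	// api calls (or as a parameter for direct form posts).
	csrf, ok := sessions[c.Value]
	if !ok || r.Header.Get("x-mox-csrf") != csrf {
		http.Error(w, "missing or invalid session/csrf", http.StatusUnauthorized)
		return false
	}
	return true
}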
var sherpaHandlerOpts *sherpa.HandlerOpts
func makeSherpaHandler(cookiePath string, isForwarded bool) (http.Handler, error) {
return sherpa.NewHandler("/api/", moxvar.Version, Account{cookiePath, isForwarded}, &accountDoc, sherpaHandlerOpts)
}
func init() {
collector, err := sherpaprom.NewCollector("moxaccount", nil)
if err != nil {
pkglog.Fatalx("creating sherpa prometheus collector", err)
}
sherpaHandlerOpts = &sherpa.HandlerOpts{Collector: collector, AdjustFunctionNames: "none", NoCORS: true}
// Just to validate.
_, err = makeSherpaHandler("", false)
if err != nil {
pkglog.Fatalx("sherpa handler", err)
}
improve http request handling for internal services and multiple domains

per listener, you could enable the admin/account/webmail/webapi handlers. but that would serve those services on their configured paths (/admin/, /, /webmail/, /webapi/) on all domains mox would be webserving, including any non-mail domains. so your www.example/admin/ would be serving the admin web interface, with no way to disable that.

with this change, the admin interface is only served on requests to (based on Host header):
- ip addresses
- the listener host name (explicitly configured in the listener, with fallback to the global hostname)
- "localhost" (for ssh tunnel/forwarding scenarios)

the account/webmail/webapi interfaces are served on the same domains as the admin interface, and additionally:
- the client settings domains, as optionally configured in each Domain in domains.conf. typically "mail.<yourdomain>".

this means the internal services are no longer served on other domains configured in the webserver, e.g. www.example.org/admin/ will not be handled specially.

the order of evaluation of routes/services is also changed: before this change, the internal handlers would always be evaluated first. with this change, only the system handlers for MTA-STS/autoconfig/ACME-validation will be evaluated first. then the webserver handlers. and finally the internal services (admin/account/webmail/webapi). this allows an admin to configure overrides for some of the domains (per the hostname-matching rules explained above) that would normally serve these services.

webserver handlers can now be configured that pass the request to an internal service: in addition to the existing static/redirect/forward config options, there is now an "internal" config option, naming the service (admin/account/webmail/webapi) for handling the request. this allows enabling the internal services on custom domains.

for issue #160 by TragicLifeHu, thanks for reporting!
2024-05-11 12:13:14 +03:00
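// An illustrative reduction of the host-based rule described above. The real
// evaluation happens in mox's web server with more cases (and the admin
// interface is not served on client settings domains); the function and
// parameter names are made up for this sketch.
func servesAccountUISketch(reqHost, listenerHostname string, clientSettingsDomains []string) bool {
	if strings.EqualFold(reqHost, listenerHostname) || strings.EqualFold(reqHost, "localhost") {
		return true
	}
	for _, d := range clientSettingsDomains {
		if strings.EqualFold(reqHost, d) {
			return true
		}
	}
	// Requests to plain ip addresses are also served; address parsing is
	// omitted in this sketch.
	return false
}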
mox.NewWebaccountHandler = func(basePath string, isForwarded bool) http.Handler {
return http.HandlerFunc(Handler(basePath, isForwarded))
}
}
// Handler returns a handler for the webaccount endpoints, customized for the
// cookiePath.
func Handler(cookiePath string, isForwarded bool) func(w http.ResponseWriter, r *http.Request) {
sh, err := makeSherpaHandler(cookiePath, isForwarded)
return func(w http.ResponseWriter, r *http.Request) {
if err != nil {
http.Error(w, "500 - internal server error - cannot handle requests", http.StatusInternalServerError)
return
}
handle(sh, isForwarded, w, r)
}
}
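// A sketch of wiring the handler onto a plain mux, with a hypothetical local
// address; mox itself registers the handler through mox.NewWebaccountHandler
// in init above, passing the listener's configured path and forwarding mode.
func exampleServeSketch() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", Handler("/", false))
	// Errors from ListenAndServe would normally be handled, not discarded.
	_ = http.ListenAndServe("127.0.0.1:8080", mux)
}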
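// xcheckf logs err with the formatted message and panics with a *sherpa.Error
// if err is not nil. Config/request errors and context cancellation become
// user errors; everything else is a server error.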
func xcheckf(ctx context.Context, err error, format string, args ...any) {
if err == nil {
return
}
	// If the caller tried saving an invalid config, or the request was bad, raise a user error instead of a server error.
if errors.Is(err, mox.ErrConfig) || errors.Is(err, mox.ErrRequest) {
xcheckuserf(ctx, err, format, args...)
}
msg := fmt.Sprintf(format, args...)
errmsg := fmt.Sprintf("%s: %s", msg, err)
pkglog.WithContext(ctx).Errorx(msg, err)
code := "server:error"
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
code = "user:error"
}
panic(&sherpa.Error{Code: code, Message: errmsg})
}
func xcheckuserf(ctx context.Context, err error, format string, args ...any) {
if err == nil {
return
}
msg := fmt.Sprintf(format, args...)
errmsg := fmt.Sprintf("%s: %s", msg, err)
pkglog.WithContext(ctx).Errorx(msg, err)
panic(&sherpa.Error{Code: "user:error", Message: errmsg})
}
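// A sketch of how these helpers are used by the api methods below; the
// operation is hypothetical, and sherpa recovers the panic and turns the
// *sherpa.Error into a json error response for the client.
func exampleCheckSketch(ctx context.Context) {
	err := errors.New("example failure")
	// Panics with code "server:error", or "user:error" for config/request
	// errors and cancellation; does not return when err is non-nil.
	xcheckf(ctx, err, "performing example operation")
}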
// Account exports web API functions for the account web interface. All its
// methods are exported under api/. Function calls require valid HTTP
// Authentication credentials of a user.
type Account struct {
cookiePath string // From listener, for setting authentication cookies.
isForwarded bool // From listener, whether we look at X-Forwarded-* headers.
}
func handle(apiHandler http.Handler, isForwarded bool, w http.ResponseWriter, r *http.Request) {
ctx := context.WithValue(r.Context(), mlog.CidKey, mox.Cid())
log := pkglog.WithContext(ctx).With(slog.String("userauth", ""))
// Without authentication. The token is unguessable.
if r.URL.Path == "/importprogress" {
if r.Method != "GET" {
http.Error(w, "405 - method not allowed - get required", http.StatusMethodNotAllowed)
return
}
q := r.URL.Query()
token := q.Get("token")
if token == "" {
http.Error(w, "400 - bad request - missing token", http.StatusBadRequest)
return
}
flusher, ok := w.(http.Flusher)
if !ok {
log.Error("internal error: ResponseWriter not a http.Flusher")
http.Error(w, "500 - internal error - cannot access underlying connection", 500)
return
}
l := importListener{token, make(chan importEvent, 100), make(chan bool, 1)}
importers.Register <- &l
ok = <-l.Register
if !ok {
http.Error(w, "400 - bad request - unknown token, import may have finished more than a minute ago", http.StatusBadRequest)
return
}
defer func() {
importers.Unregister <- &l
}()
h := w.Header()
h.Set("Content-Type", "text/event-stream")
h.Set("Cache-Control", "no-cache")
_, err := w.Write([]byte(": keepalive\n\n"))
if err != nil {
return
}
flusher.Flush()
cctx := r.Context()
for {
select {
case e := <-l.Events:
_, err := w.Write(e.SSEMsg)
flusher.Flush()
if err != nil {
return
}
case <-cctx.Done():
return
}
}
}
// HTML/JS can be retrieved without authentication.
if r.URL.Path == "/" {
switch r.Method {
case "GET", "HEAD":
webaccountFile.Serve(ctx, log, w, r)
default:
http.Error(w, "405 - method not allowed - use get", http.StatusMethodNotAllowed)
}
return
} else if r.URL.Path == "/licenses.txt" {
switch r.Method {
case "GET", "HEAD":
mox.LicensesWrite(w)
default:
http.Error(w, "405 - method not allowed - use get", http.StatusMethodNotAllowed)
}
return
}
isAPI := strings.HasPrefix(r.URL.Path, "/api/")
	// Only allow POST for calls; they will not work cross-domain without CORS.
if isAPI && r.URL.Path != "/api/" && r.Method != "POST" {
http.Error(w, "405 - method not allowed - use post", http.StatusMethodNotAllowed)
return
}
var loginAddress, accName string
var sessionToken store.SessionToken
	// All other URLs, except the login endpoints, require authentication.
if r.URL.Path != "/api/LoginPrep" && r.URL.Path != "/api/Login" {
var ok bool
isExport := r.URL.Path == "/export"
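		// API calls must send the CSRF token as a header; the import/export form
		// posts send it as a form parameter. Both are verified together with the
		// session cookie in the check below.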
requireCSRF := isAPI || r.URL.Path == "/import" || isExport
accName, sessionToken, loginAddress, ok = webauth.Check(ctx, log, webauth.Accounts, "webaccount", isForwarded, w, r, isAPI, requireCSRF, isExport)
if !ok {
// Response has been written already.
return
}
}
if isAPI {
reqInfo := requestInfo{loginAddress, accName, sessionToken, w, r}
ctx = context.WithValue(ctx, requestInfoCtxKey, reqInfo)
apiHandler.ServeHTTP(w, r.WithContext(ctx))
return
}
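	// Remaining paths are direct form/browser requests: exporting and importing mail.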
switch r.URL.Path {
case "/export":
webops.Export(log, accName, w, r)
case "/import":
if r.Method != "POST" {
http.Error(w, "405 - method not allowed - post required", http.StatusMethodNotAllowed)
return
}
f, _, err := r.FormFile("file")
if err != nil {
if errors.Is(err, http.ErrMissingFile) {
http.Error(w, "400 - bad request - missing file", http.StatusBadRequest)
} else {
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
}
return
}
defer func() {
err := f.Close()
log.Check(err, "closing form file")
}()
skipMailboxPrefix := r.FormValue("skipMailboxPrefix")
tmpf, err := os.CreateTemp("", "mox-import")
if err != nil {
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
return
}
defer func() {
if tmpf != nil {
store.CloseRemoveTempFile(log, tmpf, "upload")
}
}()
if _, err := io.Copy(tmpf, f); err != nil {
log.Errorx("copying import to temporary file", err)
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
return
}
token, isUserError, err := importStart(log, accName, tmpf, skipMailboxPrefix)
if err != nil {
log.Errorx("starting import", err, slog.Bool("usererror", isUserError))
if isUserError {
http.Error(w, "400 - bad request - "+err.Error(), http.StatusBadRequest)
} else {
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
}
return
}
tmpf = nil // importStart is now responsible for cleanup.
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(ImportProgress{Token: token})
default:
http.NotFound(w, r)
}
}
// ImportProgress is returned after uploading a file to import.
type ImportProgress struct {
// For fetching progress, or cancelling an import.
Token string
}
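// Sketch of a client calling the import endpoint above; not part of this
// package. The baseURL, sessionCookie and the "csrf" form-parameter name are
// assumptions, while the "file" and "skipMailboxPrefix" fields and the JSON
// response shape follow from the handler above:
//
//	var body bytes.Buffer
//	mw := multipart.NewWriter(&body)
//	fw, _ := mw.CreateFormFile("file", "messages.zip")
//	_, _ = io.Copy(fw, archive)                // archive with the messages to import
//	_ = mw.WriteField("skipMailboxPrefix", "") // optional mailbox prefix to strip
//	_ = mw.WriteField("csrf", csrfToken)       // hypothetical parameter name
//	_ = mw.Close()
//	req, _ := http.NewRequest("POST", baseURL+"/import", &body)
//	req.Header.Set("Content-Type", mw.FormDataContentType())
//	req.AddCookie(sessionCookie)
//	resp, _ := http.DefaultClient.Do(req)
//	var progress ImportProgress
//	_ = json.NewDecoder(resp.Body).Decode(&progress)
//	// progress.Token can then be used to follow or cancel the import.
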
type ctxKey string
var requestInfoCtxKey ctxKey = "requestInfo"
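// requestInfo is stored in the context of authenticated requests, giving API
// methods access to the authenticated account and to the underlying HTTP
// request and response writer.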
type requestInfo struct {
LoginAddress string
AccountName string
SessionToken store.SessionToken
Response http.ResponseWriter
Request *http.Request // For Proto and TLS connection state during message submit.
}
// LoginPrep returns a login token, and also sets it as a cookie. Both must be
// present in the call to Login.
func (w Account) LoginPrep(ctx context.Context) string {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
var data [8]byte
_, err := cryptorand.Read(data[:])
xcheckf(ctx, err, "generate token")
loginToken := base64.RawURLEncoding.EncodeToString(data[:])
webauth.LoginPrep(ctx, log, "webaccount", w.cookiePath, w.isForwarded, reqInfo.Response, reqInfo.Request, loginToken)
return loginToken
}
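// Typical frontend flow: call LoginPrep for a login token, pass it to Login
// together with the credentials, store the returned CSRF token (the frontend
// keeps it in localstorage) and send it with each subsequent API call; the
// session cookie travels along automatically.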
// Login authenticates with the given credentials, sets a session cookie, and
// returns the CSRF token for the new session, or fails with error code
// "user:badLogin". Call LoginPrep first to get a loginToken.
func (w Account) Login(ctx context.Context, loginToken, username, password string) store.CSRFToken {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
csrfToken, err := webauth.Login(ctx, log, webauth.Accounts, "webaccount", w.cookiePath, w.isForwarded, reqInfo.Response, reqInfo.Request, loginToken, username, password)
if _, ok := err.(*sherpa.Error); ok {
panic(err)
}
xcheckf(ctx, err, "login")
return csrfToken
}
// Logout invalidates the session token.
func (w Account) Logout(ctx context.Context) {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := webauth.Logout(ctx, log, webauth.Accounts, "webaccount", w.cookiePath, w.isForwarded, reqInfo.Response, reqInfo.Request, reqInfo.AccountName, reqInfo.SessionToken)
xcheckf(ctx, err, "logout")
}
// SetPassword saves a new password for the account, invalidating the previous password.
// Changing the password invalidates all other sessions for the account; the
// session used for this call keeps working. New login attempts must use the new password.
// Password must be at least 8 characters.
func (Account) SetPassword(ctx context.Context, password string) {
log := pkglog.WithContext(ctx)
if len(password) < 8 {
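		// Panicking with a *sherpa.Error makes the API library return it to the
		// client as an error with this code and message.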
panic(&sherpa.Error{Code: "user:error", Message: "password must be at least 8 characters"})
}
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
acc, err := store.OpenAccount(log, reqInfo.AccountName)
xcheckf(ctx, err, "open account")
defer func() {
err := acc.Close()
log.Check(err, "closing account")
}()
	// Retrieve the current session now: changing the password below invalidates
	// all sessions, and we restore this one afterwards.
ls, err := store.SessionUse(ctx, log, reqInfo.AccountName, reqInfo.SessionToken, "")
xcheckf(ctx, err, "get session")
err = acc.SetPassword(log, password)
xcheckf(ctx, err, "setting password")
// Session has been invalidated. Add it again.
err = store.SessionAddToken(ctx, log, &ls)
xcheckf(ctx, err, "restoring session after password reset")
}
// Account returns information about the account.
// StorageUsed is the sum of the sizes of all messages, in bytes.
// StorageLimit is the maximum storage that can be used, or 0 if there is no limit.
func (Account) Account(ctx context.Context) (account config.Account, storageUsed, storageLimit int64, suppressions []webapi.Suppression) {
log := pkglog.WithContext(ctx)
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
acc, err := store.OpenAccount(log, reqInfo.AccountName)
xcheckf(ctx, err, "open account")
defer func() {
err := acc.Close()
log.Check(err, "closing account")
}()
var accConf config.Account
acc.WithRLock(func() {
accConf, _ = acc.Conf()
storageLimit = acc.QuotaMessageSize()
err := acc.DB.Read(ctx, func(tx *bstore.Tx) error {
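			// The DiskUsage record with fixed ID 1 holds the total size of all messages in the account.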
du := store.DiskUsage{ID: 1}
err := tx.Get(&du)
storageUsed = du.MessageSize
return err
})
xcheckf(ctx, err, "get disk usage")
})
suppressions, err = queue.SuppressionList(ctx, reqInfo.AccountName)
xcheckf(ctx, err, "list suppressions")
return accConf, storageUsed, storageLimit, suppressions
}
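// A rough shape of calling the method above through the sherpa API handler,
// assuming sherpa's JSON convention of {"params": [...]} in and {"result": ...}
// out (illustrative only; the session cookie and CSRF value must accompany the
// request as described earlier):
//
//	POST .../api/Account
//	{"params": []}
//	=> {"result": [account, storageUsed, storageLimit, suppressions]}
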
// AccountSaveFullName saves the full name (used as display name in email messages)
// for the account.
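// AccountSaveFullName saves a new full name for the account, e.g. for use as
// display name in From headers of composed messages.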
func (Account) AccountSaveFullName(ctx context.Context, fullName string) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
acc.FullName = fullName
})
xcheckf(ctx, err, "saving account full name")
}
// DestinationSave updates a destination.
// OldDest is compared against the current destination. If it does not match, an
// error is returned. Otherwise newDest is saved and the configuration reloaded.
func (Account) DestinationSave(ctx context.Context, destName string, oldDest, newDest config.Destination) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(conf *config.Account) {
curDest, ok := conf.Destinations[destName]
if !ok {
xcheckuserf(ctx, errors.New("not found"), "looking up destination")
}
if !curDest.Equal(oldDest) {
xcheckuserf(ctx, errors.New("modified"), "checking stored destination")
}
// Keep fields we manage.
newDest.DMARCReports = curDest.DMARCReports
newDest.HostTLSReports = curDest.HostTLSReports
newDest.DomainTLSReports = curDest.DomainTLSReports
// Make copy of reference values.
nd := map[string]config.Destination{}
for dn, d := range conf.Destinations {
nd[dn] = d
}
nd[destName] = newDest
conf.Destinations = nd
})
xcheckf(ctx, err, "saving destination")
}
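// Illustrative sketch only, not called anywhere: how a caller does a
// compare-and-swap update with DestinationSave. The caller fetches the current
// destination, modifies a copy and passes both; a concurrent change in between
// results in the "modified" user error above instead of a silently lost
// update. The destination name and mailbox are hypothetical, and ctx must
// carry the authenticated requestInfo set up by the API handler.
func exampleDestinationSave(ctx context.Context, api Account, cur config.Destination) {
	upd := cur
	upd.Mailbox = "Archive" // Deliver messages for this address to another mailbox.
	api.DestinationSave(ctx, "mjl@example.org", cur, upd)
}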
// ImportAbort aborts an import that is in progress. If the import exists and isn't
// finished, no changes will have been made by the import.
func (Account) ImportAbort(ctx context.Context, importToken string) error {
req := importAbortRequest{importToken, make(chan error)}
importers.Abort <- req
return <-req.Response
}
// Types exposes types not used in API method signatures, such as the import
// form upload. Sherpadoc only includes types reachable from method signatures,
// so this otherwise unused method makes them part of the generated API
// documentation and TypeScript client types.
func (Account) Types() (importProgress ImportProgress) {
return
}
// SuppressionList lists the addresses on the suppression list of this account.
func (Account) SuppressionList(ctx context.Context) (suppressions []webapi.Suppression) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
l, err := queue.SuppressionList(ctx, reqInfo.AccountName)
xcheckf(ctx, err, "list suppressions")
return l
}
// SuppressionAdd adds an email address to the suppression list.
func (Account) SuppressionAdd(ctx context.Context, address string, manual bool, reason string) (suppression webapi.Suppression) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
addr, err := smtp.ParseAddress(address)
xcheckuserf(ctx, err, "parsing address")
sup := webapi.Suppression{
Account: reqInfo.AccountName,
Manual: manual,
Reason: reason,
}
err = queue.SuppressionAdd(ctx, addr.Path(), &sup)
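	// Adding an address that is already present is a user error; any other
	// error is a server error.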
if err != nil && errors.Is(err, bstore.ErrUnique) {
xcheckuserf(ctx, err, "add suppression")
}
xcheckf(ctx, err, "add suppression")
return sup
}
// SuppressionRemove removes the email address from the suppression list.
func (Account) SuppressionRemove(ctx context.Context, address string) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
addr, err := smtp.ParseAddress(address)
xcheckuserf(ctx, err, "parsing address")
err = queue.SuppressionRemove(ctx, reqInfo.AccountName, addr.Path())
	if err != nil && errors.Is(err, bstore.ErrAbsent) {
xcheckuserf(ctx, err, "remove suppression")
}
xcheckf(ctx, err, "remove suppression")
}
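// Illustrative sketch only, not called anywhere: manually suppressing an
// address, listing suppressions and removing the address again. The address
// and reason are hypothetical, and ctx must carry the authenticated
// requestInfo set up by the API handler.
func exampleSuppressions(ctx context.Context, api Account) {
	api.SuppressionAdd(ctx, "bouncing@example.org", true, "recipient mailbox does not exist")
	for _, sup := range api.SuppressionList(ctx) {
		_ = sup // E.g. render sup.Reason in a settings page.
	}
	api.SuppressionRemove(ctx, "bouncing@example.org")
}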
// OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url
// is empty, the webhook is disabled. If authorization is non-empty it is used for
// the Authorization header in HTTP requests. Events specifies the outgoing events
// to be delivered, or all if empty/nil.
func (Account) OutgoingWebhookSave(ctx context.Context, url, authorization string, events []string) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
if url == "" {
acc.OutgoingWebhook = nil
} else {
acc.OutgoingWebhook = &config.OutgoingWebhook{URL: url, Authorization: authorization, Events: events}
}
})
xcheckf(ctx, err, "saving account outgoing webhook")
}
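// Illustrative sketch only, not called anywhere: enabling an outgoing webhook
// for a subset of events, then disabling it again. The URL, authorization
// value and event names are assumptions for illustration, and ctx must carry
// the authenticated requestInfo set up by the API handler.
func exampleOutgoingWebhookSave(ctx context.Context, api Account) {
	api.OutgoingWebhookSave(ctx, "https://app.example.org/mox-hooks", "Bearer s3cret", []string{"delivered", "failed"})
	api.OutgoingWebhookSave(ctx, "", "", nil) // An empty URL disables the webhook.
}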
// OutgoingWebhookTest makes a test webhook call to urlStr, with optional
// authorization. If the HTTP request is made, this call succeeds even for
// non-2xx HTTP status codes.
func (Account) OutgoingWebhookTest(ctx context.Context, urlStr, authorization string, data webhook.Outgoing) (code int, response string, errmsg string) {
log := pkglog.WithContext(ctx)
xvalidURL(ctx, urlStr)
log.Debug("making webhook test call for outgoing message", slog.String("url", urlStr))
var b bytes.Buffer
enc := json.NewEncoder(&b)
enc.SetIndent("", "\t")
enc.SetEscapeHTML(false)
err := enc.Encode(data)
xcheckf(ctx, err, "encoding outgoing webhook data")
code, response, err = queue.HookPost(ctx, log, 1, 1, urlStr, authorization, b.String())
if err != nil {
errmsg = err.Error()
}
log.Debugx("result for webhook test call for outgoing message", err, slog.Int("code", code), slog.String("response", response))
return code, response, errmsg
}
// IncomingWebhookSave saves a new webhook url for incoming deliveries. If url is
// empty, the webhook is disabled. If authorization is not empty, it is used in
// the Authorization header in requests.
func (Account) IncomingWebhookSave(ctx context.Context, url, authorization string) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
if url == "" {
acc.IncomingWebhook = nil
} else {
acc.IncomingWebhook = &config.IncomingWebhook{URL: url, Authorization: authorization}
}
})
xcheckf(ctx, err, "saving account incoming webhook")
}
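// xvalidURL checks that s parses as a URL with scheme http or https, failing
// the request with a user error otherwise.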
func xvalidURL(ctx context.Context, s string) {
u, err := url.Parse(s)
xcheckuserf(ctx, err, "parsing url")
if u.Scheme != "http" && u.Scheme != "https" {
xcheckuserf(ctx, errors.New("scheme must be http or https"), "parsing url")
}
}
// IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr,
// with optional authorization header. If the HTTP call is made, this function
// returns without error regardless of the HTTP status code.
func (Account) IncomingWebhookTest(ctx context.Context, urlStr, authorization string, data webhook.Incoming) (code int, response string, errmsg string) {
log := pkglog.WithContext(ctx)
xvalidURL(ctx, urlStr)
log.Debug("making webhook test call for incoming message", slog.String("url", urlStr))
var b bytes.Buffer
enc := json.NewEncoder(&b)
enc.SetEscapeHTML(false)
enc.SetIndent("", "\t")
err := enc.Encode(data)
xcheckf(ctx, err, "encoding incoming webhook data")
code, response, err = queue.HookPost(ctx, log, 1, 1, urlStr, authorization, b.String())
if err != nil {
errmsg = err.Error()
}
log.Debugx("result for webhook test call for incoming message", err, slog.Int("code", code), slog.String("response", response))
return code, response, errmsg
}
// FromIDLoginAddressesSave saves new login addresses to enable unique SMTP
// MAIL FROM addresses ("fromid") for deliveries from the queue.
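// For example, with a localpart catchall separator "+" configured for the
// domain, submissions authenticated as a listed login address are queued with
// a unique MAIL FROM such as "user+R4nd0mId@example.org" (hypothetical), so
// incoming DSNs can be matched to the original outgoing message.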
func (Account) FromIDLoginAddressesSave(ctx context.Context, loginAddresses []string) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
acc.FromIDLoginAddresses = loginAddresses
})
xcheckf(ctx, err, "saving account fromid login addresses")
}
// KeepRetiredPeriodsSave saves the periods to keep retired messages and
// webhooks after the queue has finished with them.
func (Account) KeepRetiredPeriodsSave(ctx context.Context, keepRetiredMessagePeriod, keepRetiredWebhookPeriod time.Duration) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
acc.KeepRetiredMessagePeriod = keepRetiredMessagePeriod
acc.KeepRetiredWebhookPeriod = keepRetiredWebhookPeriod
})
xcheckf(ctx, err, "saving account keep retired periods")
}
// AutomaticJunkFlagsSave saves settings for automatically marking messages as
// junk/nonjunk when moved to mailboxes matching certain regular expressions.
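// For example (patterns are illustrative only): a junk regexp "^(junk|spam)"
// trains messages moved into a Junk or Spam mailbox as junk, while a not-junk
// regexp "^(archive|nonjunk)" trains such moves as nonjunk.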
func (Account) AutomaticJunkFlagsSave(ctx context.Context, enabled bool, junkRegexp, neutralRegexp, notJunkRegexp string) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
acc.AutomaticJunkFlags = config.AutomaticJunkFlags{
Enabled: enabled,
JunkMailboxRegexp: junkRegexp,
NeutralMailboxRegexp: neutralRegexp,
NotJunkMailboxRegexp: notJunkRegexp,
}
})
xcheckf(ctx, err, "saving account automatic junk flags")
}
// JunkFilterSave saves junk filter settings. If junkFilter is nil, the junk filter
// is disabled. Otherwise all fields except Threegrams are stored.
func (Account) JunkFilterSave(ctx context.Context, junkFilter *config.JunkFilter) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
if junkFilter == nil {
acc.JunkFilter = nil
return
}
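		// Preserve the previously stored Threegrams setting; it cannot be
		// changed through this call.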
old := acc.JunkFilter
acc.JunkFilter = junkFilter
acc.JunkFilter.Params.Threegrams = false
if old != nil {
acc.JunkFilter.Params.Threegrams = old.Params.Threegrams
}
})
xcheckf(ctx, err, "saving account junk filter settings")
}
// RejectsSave saves the RejectsMailbox and KeepRejects settings.
func (Account) RejectsSave(ctx context.Context, mailbox string, keep bool) {
reqInfo := ctx.Value(requestInfoCtxKey).(requestInfo)
err := mox.AccountSave(ctx, reqInfo.AccountName, func(acc *config.Account) {
acc.RejectsMailbox = mailbox
acc.KeepRejects = keep
})
xcheckf(ctx, err, "saving account rejects settings")
}