mox/webaccount/account_test.go

package webaccount
import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"compress/gzip"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"net/http/httptest"
"net/url"
"os"
"path"
"path/filepath"
"runtime/debug"
"sort"
"strings"
"testing"
"github.com/mjl-/bstore"
"github.com/mjl-/sherpa"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/store"
"github.com/mjl-/mox/webauth"
)
var ctxbg = context.Background()

func init() {
	mox.LimitersInit()
	webauth.BadAuthDelay = 0
}
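
// tcheck fails the test with msg if err is non-nil.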
func tcheck(t *testing.T, err error, msg string) {
	t.Helper()
	if err != nil {
		t.Fatalf("%s: %s", msg, err)
	}
}
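
// readBody returns the contents of r, or a description of the read error.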
func readBody(r io.Reader) string {
	buf, err := io.ReadAll(r)
	if err != nil {
		return fmt.Sprintf("read error: %s", err)
	}
	return fmt.Sprintf("data: %q", buf)
}
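
// tneedErrorCode calls fn and requires it to panic with a *sherpa.Error carrying the given code.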
func tneedErrorCode(t *testing.T, code string, fn func()) {
	t.Helper()
	defer func() {
		t.Helper()
		x := recover()
		if x == nil {
			debug.PrintStack()
			t.Fatalf("expected sherpa user error, saw success")
		}
		if err, ok := x.(*sherpa.Error); !ok {
			debug.PrintStack()
			t.Fatalf("expected sherpa error, saw %#v", x)
		} else if err.Code != code {
			debug.PrintStack()
			t.Fatalf("expected sherpa error code %q, saw other sherpa error %#v", code, err)
		}
	}()

	fn()
}
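
// TestAccount exercises the account web API: login preparation, session and CSRF
// checks, and the protected import/export endpoints.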
func TestAccount(t *testing.T) {
	os.RemoveAll("../testdata/httpaccount/data")
	mox.ConfigStaticPath = filepath.FromSlash("../testdata/httpaccount/mox.conf")
	mox.ConfigDynamicPath = filepath.Join(filepath.Dir(mox.ConfigStaticPath), "domains.conf")
	mox.MustLoadConfig(true, false)
	log := mlog.New("webaccount", nil)
	acc, err := store.OpenAccount(log, "mjl☺")
	tcheck(t, err, "open account")
	err = acc.SetPassword(log, "test1234")
	tcheck(t, err, "set password")
	defer func() {
		err = acc.Close()
		tcheck(t, err, "closing account")
	}()
	defer store.Switchboard()()

	api := Account{cookiePath: "/account/"}
	apiHandler, err := makeSherpaHandler(api.cookiePath, false)
	tcheck(t, err, "sherpa handler")

	// Record HTTP response to get session cookie for login.
	respRec := httptest.NewRecorder()
	reqInfo := requestInfo{"", "", "", respRec, &http.Request{RemoteAddr: "127.0.0.1:1234"}}
	ctx := context.WithValue(ctxbg, requestInfoCtxKey, reqInfo)

	// Missing login token.
	tneedErrorCode(t, "user:error", func() { api.Login(ctx, "", "mjl☺@mox.example", "test1234") })

	// Login with loginToken.
	loginCookie := &http.Cookie{Name: "webaccountlogin"}
	loginCookie.Value = api.LoginPrep(ctx)
	reqInfo.Request.Header = http.Header{"Cookie": []string{loginCookie.String()}}
	csrfToken := api.Login(ctx, loginCookie.Value, "mjl☺@mox.example", "test1234")
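
	// A successful Login must have set a session cookie on the response.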
	var sessionCookie *http.Cookie
	for _, c := range respRec.Result().Cookies() {
		if c.Name == "webaccountsession" {
			sessionCookie = c
			break
		}
	}
	if sessionCookie == nil {
		t.Fatalf("missing session cookie")
	}

	// Valid loginToken, but bad credentials.
	loginCookie.Value = api.LoginPrep(ctx)
	reqInfo.Request.Header = http.Header{"Cookie": []string{loginCookie.String()}}
	tneedErrorCode(t, "user:loginFailed", func() { api.Login(ctx, loginCookie.Value, "mjl☺@mox.example", "badauth") })
	tneedErrorCode(t, "user:loginFailed", func() { api.Login(ctx, loginCookie.Value, "baduser@mox.example", "badauth") })
	tneedErrorCode(t, "user:loginFailed", func() { api.Login(ctx, loginCookie.Value, "baduser@baddomain.example", "badauth") })

	type httpHeaders [][2]string
	ctJSON := [2]string{"Content-Type", "application/json; charset=utf-8"}

	cookieOK := &http.Cookie{Name: "webaccountsession", Value: sessionCookie.Value}
	cookieBad := &http.Cookie{Name: "webaccountsession", Value: "AAAAAAAAAAAAAAAAAAAAAA mjl"}
	hdrSessionOK := [2]string{"Cookie", cookieOK.String()}
	hdrSessionBad := [2]string{"Cookie", cookieBad.String()}
	hdrCSRFOK := [2]string{"x-mox-csrf", string(csrfToken)}
	hdrCSRFBad := [2]string{"x-mox-csrf", "AAAAAAAAAAAAAAAAAAAAAA"}
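
	// testHTTP performs a request with the given headers, checks the status code and
	// expected response headers, and optionally runs check on the response.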
	testHTTP := func(method, path string, headers httpHeaders, expStatusCode int, expHeaders httpHeaders, check func(resp *http.Response)) {
		t.Helper()
		req := httptest.NewRequest(method, path, nil)
		for _, kv := range headers {
			req.Header.Add(kv[0], kv[1])
		}
		rr := httptest.NewRecorder()
		rr.Body = &bytes.Buffer{}
		handle(apiHandler, false, rr, req)
		if rr.Code != expStatusCode {
			t.Fatalf("got status %d, expected %d (%s)", rr.Code, expStatusCode, readBody(rr.Body))
		}
		resp := rr.Result()
		for _, h := range expHeaders {
			if resp.Header.Get(h[0]) != h[1] {
				t.Fatalf("for header %q got value %q, expected %q", h[0], resp.Header.Get(h[0]), h[1])
			}
		}
		if check != nil {
			check(resp)
		}
	}
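
	// testHTTPAuthAPI performs an API request with valid CSRF and session headers.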
	testHTTPAuthAPI := func(method, path string, expStatusCode int, expHeaders httpHeaders, check func(resp *http.Response)) {
		t.Helper()
		testHTTP(method, path, httpHeaders{hdrCSRFOK, hdrSessionOK}, expStatusCode, expHeaders, check)
	}

	// userAuthError checks that the response body is a JSON sherpa error with the expected code.
	userAuthError := func(resp *http.Response, expCode string) {
		t.Helper()
		var response struct {
			Error *sherpa.Error `json:"error"`
		}
		err := json.NewDecoder(resp.Body).Decode(&response)
		tcheck(t, err, "parsing response as json")
		if response.Error == nil {
			t.Fatalf("expected sherpa error with code %s, saw no error", expCode)
		}
		if response.Error.Code != expCode {
			t.Fatalf("got sherpa error code %q, expected %q", response.Error.Code, expCode)
		}
	}

	badAuth := func(resp *http.Response) {
		t.Helper()
		userAuthError(resp, "user:badAuth")
	}
	noAuth := func(resp *http.Response) {
		t.Helper()
		userAuthError(resp, "user:noAuth")
	}
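
	// API calls need both a session cookie and a CSRF header. If either is missing,
	// the error is "user:noAuth"; if both are present but either is invalid, it is
	// "user:badAuth".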
testHTTP("POST", "/api/Bogus", httpHeaders{}, http.StatusOK, nil, noAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrCSRFBad}, http.StatusOK, nil, noAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrSessionBad}, http.StatusOK, nil, noAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrCSRFBad, hdrSessionBad}, http.StatusOK, nil, badAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrCSRFOK}, http.StatusOK, nil, noAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrSessionOK}, http.StatusOK, nil, noAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrCSRFBad, hdrSessionOK}, http.StatusOK, nil, badAuth)
testHTTP("POST", "/api/Bogus", httpHeaders{hdrCSRFOK, hdrSessionBad}, http.StatusOK, nil, badAuth)
testHTTPAuthAPI("GET", "/api/Types", http.StatusMethodNotAllowed, nil, nil)
testHTTPAuthAPI("POST", "/api/Types", http.StatusOK, httpHeaders{ctJSON}, nil)
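// Import and export requests are made directly by the browser (form posts and
// downloads), which cannot set a custom CSRF header. Without the CSRF form
// value they are rejected with 403, even with a valid session cookie.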
testHTTP("POST", "/import", httpHeaders{}, http.StatusForbidden, nil, nil)
testHTTP("POST", "/import", httpHeaders{hdrSessionBad}, http.StatusForbidden, nil, nil)
testHTTP("GET", "/export/mail-export-maildir.tgz", httpHeaders{}, http.StatusForbidden, nil, nil)
testHTTP("GET", "/export/mail-export-maildir.tgz", httpHeaders{hdrSessionBad}, http.StatusForbidden, nil, nil)
testHTTP("GET", "/export/mail-export-maildir.tgz", httpHeaders{hdrSessionOK}, http.StatusForbidden, nil, nil)
testHTTP("GET", "/export/mail-export-maildir.zip", httpHeaders{}, http.StatusForbidden, nil, nil)
testHTTP("GET", "/export/mail-export-mbox.tgz", httpHeaders{}, http.StatusForbidden, nil, nil)
testHTTP("GET", "/export/mail-export-mbox.zip", httpHeaders{}, http.StatusForbidden, nil, nil)
// SetPassword needs the session token; the token is the part of the cookie value before the first space.
sessionToken := store.SessionToken(strings.SplitN(sessionCookie.Value, " ", 2)[0])
reqInfo = requestInfo{"mjl☺@mox.example", "mjl☺", sessionToken, respRec, &http.Request{RemoteAddr: "127.0.0.1:1234"}}
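// API calls pick up the authenticated session from the request info stored in the context.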
ctx = context.WithValue(ctxbg, requestInfoCtxKey, reqInfo)
api.SetPassword(ctx, "test1234")
fullName, _, dests, _, _ := api.Account(ctx)
api.DestinationSave(ctx, "mjl☺@mox.example", dests["mjl☺@mox.example"], dests["mjl☺@mox.example"]) // todo: save modified value and compare it afterwards
api.AccountSaveFullName(ctx, fullName+" changed") // todo: check if value was changed
api.AccountSaveFullName(ctx, fullName)            // Restore the original full name.
go ImportManage() // Manages import jobs and propagates their progress to event listeners.
// Import mbox/maildir tgz/zip.
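// testImport posts the archive to /import as a multipart form, with CSRF
// header and session cookie, then follows the import progress over the event
// listener and checks the number of imported messages.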
testImport := func(filename string, expect int) {
t.Helper()
var reqBody bytes.Buffer
mpw := multipart.NewWriter(&reqBody)
part, err := mpw.CreateFormFile("file", path.Base(filename))
tcheck(t, err, "creating form file")
buf, err := os.ReadFile(filename)
tcheck(t, err, "reading file")
_, err = part.Write(buf)
tcheck(t, err, "write part")
err = mpw.Close()
tcheck(t, err, "close multipart writer")
r := httptest.NewRequest("POST", "/import", &reqBody)
r.Header.Add("Content-Type", mpw.FormDataContentType())
r.Header.Add("x-mox-csrf", string(csrfToken))
r.Header.Add("Cookie", cookieOK.String())
w := httptest.NewRecorder()
handle(apiHandler, false, w, r)
if w.Code != http.StatusOK {
t.Fatalf("import, got status code %d, expected 200: %s", w.Code, w.Body.Bytes())
}
var m ImportProgress
if err := json.Unmarshal(w.Body.Bytes(), &m); err != nil {
t.Fatalf("parsing import response: %v", err)
}
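// Subscribe to the import events for the token returned by /import.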
l := importListener{m.Token, make(chan importEvent, 100), make(chan bool)}
importers.Register <- &l
if !<-l.Register {
t.Fatalf("register failed")
}
defer func() {
importers.Unregister <- &l
}()
count := 0
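// Tally the imported messages until the import finishes (or is aborted).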
loop:
for {
e := <-l.Events
if e.Event == nil {
continue
}
switch x := e.Event.(type) {
case importCount:
count += x.Count
case importProblem:
t.Fatalf("unexpected problem: %q", x.Message)
case importStep:
case importDone:
break loop
case importAborted:
t.Fatalf("unexpected aborted import")
default:
panic(fmt.Sprintf("missing case for Event %#v", e))
}
}
if count != expect {
t.Fatalf("imported %d messages, expected %d", count, expect)
}
}
testImport(filepath.FromSlash("../testdata/importtest.mbox.zip"), 2)
testImport(filepath.FromSlash("../testdata/importtest.maildir.tgz"), 2)
// Check that the messages were imported, with the right keywords on both the messages and their mailboxes.
acc.DB.Read(ctxbg, func(tx *bstore.Tx) error {
_, err = bstore.QueryTx[store.Message](tx).FilterEqual("Expunged", false).FilterIn("Keywords", "other").FilterIn("Keywords", "test").Get()
tcheck(t, err, `fetching message with keywords "other" and "test"`)
mb, err := acc.MailboxFind(tx, "importtest")
tcheck(t, err, "looking up mailbox importtest")
if mb == nil {
t.Fatalf("missing mailbox importtest")
}
sort.Strings(mb.Keywords)
if strings.Join(mb.Keywords, " ") != "other test" {
t.Fatalf(`expected mailbox keywords "other" and "test", got %v`, mb.Keywords)
}
n, err := bstore.QueryTx[store.Message](tx).FilterEqual("Expunged", false).FilterIn("Keywords", "custom").Count()
tcheck(t, err, `fetching message with keyword "custom"`)
if n != 2 {
t.Fatalf(`got %d messages with keyword "custom", expected 2`, n)
}
mb, err = acc.MailboxFind(tx, "maildir")
tcheck(t, err, "looking up mailbox maildir")
if mb == nil {
t.Fatalf("missing mailbox maildir")
}
if strings.Join(mb.Keywords, " ") != "custom" {
t.Fatalf(`expected mailbox keywords "custom", got %v`, mb.Keywords)
}
return nil
})
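// testExport posts the CSRF token as a form value (exports are direct form
// posts, so no custom header can be set) and counts the files in the returned
// zip or tgz archive.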
testExport := func(httppath string, iszip bool, expectFiles int) {
t.Helper()
fields := url.Values{"csrf": []string{string(csrfToken)}}
r := httptest.NewRequest("POST", httppath, strings.NewReader(fields.Encode()))
r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
r.Header.Add("Cookie", cookieOK.String())
w := httptest.NewRecorder()
handle(apiHandler, false, w, r)
if w.Code != http.StatusOK {
t.Fatalf("export, got status code %d, expected 200: %s", w.Code, w.Body.Bytes())
}
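// Count the regular files in the archive; entries ending in a slash are directories.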
var count int
if iszip {
buf := w.Body.Bytes()
zr, err := zip.NewReader(bytes.NewReader(buf), int64(len(buf)))
tcheck(t, err, "reading zip")
for _, f := range zr.File {
if !strings.HasSuffix(f.Name, "/") {
count++
}
}
} else {
gzr, err := gzip.NewReader(w.Body)
tcheck(t, err, "gzip reader")
tr := tar.NewReader(gzr)
for {
h, err := tr.Next()
if err == io.EOF {
break
}
tcheck(t, err, "next file in tar")
if !strings.HasSuffix(h.Name, "/") {
count++
}
_, err = io.Copy(io.Discard, tr)
tcheck(t, err, "reading from tar")
}
}
if count != expectFiles {
t.Fatalf("export, has %d files, expected %d", count, expectFiles)
}
}
testExport("/export/mail-export-maildir.tgz", false, 6) // 2 mailboxes, each with 2 messages and a dovecot-keyword file
testExport("/export/mail-export-maildir.zip", true, 6)
testExport("/export/mail-export-mbox.tgz", false, 2)
testExport("/export/mail-export-mbox.zip", true, 2)
api.Logout(ctx)
tneedErrorCode(t, "server:error", func() { api.Logout(ctx) }) // Second logout fails, the session is already gone.
}