/*
Package webauth handles authentication and session/csrf token management for
the web interfaces (admin, account, mail).

Authentication of web requests is through a session token in a cookie. For API
requests, and other requests where the frontend can send custom headers, a
header ("x-mox-csrf") with a CSRF token is also required and verified to belong
to the session token. For other form POSTs, a field "csrf" is required. Session
tokens and CSRF tokens are different randomly generated values. Session cookies
are "httponly", samesite "strict", and with the path set to the root of the
webadmin/webaccount/webmail. Cookies set over HTTPS are marked "secure".
Cookies don't have an expiration, they can be extended indefinitely by using
them.

To login, a call to LoginPrep must first be made. It sets a random login token
in a cookie, and returns it. The loginToken must be passed to the Login call,
along with login credentials. If the loginToken is missing, the login attempt
fails before checking any credentials. This should prevent third party websites
from tricking a browser into logging in.
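
A typical exchange, as an illustrative sketch ("webaccount" stands for any of
the kinds, and the concrete API paths depend on how the handlers are mounted):

	LoginPrep -> sets cookie webaccountlogin=<loginToken>, returns <loginToken>
	Login     -> sends loginToken, username, password; sets cookie
	             "webaccountsession=<sessionToken> <accountName>" and returns
	             the CSRF token for the frontend to keep (e.g. in localStorage)
	API call  -> cookie webaccountsession plus header x-mox-csrf: <csrfToken>
	Form POST -> cookie webaccountsession plus form field "csrf"
	Logout    -> removes the session server-side and clears the session cookie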

Sessions are stored server-side, and their lifetime automatically extended each
time they are used. This makes it easy to invalidate existing sessions after a
password change, and keeps the frontend free from handling long-term vs
short-term sessions.

Sessions for the admin interface have a lifetime of 12 hours after last use,
are only stored in memory (don't survive a server restart), and only 10
sessions can exist at a time (the oldest session is dropped).

Sessions for the account and mail interfaces have a lifetime of 24 hours after
last use, are kept in memory and stored in the database (do survive a server
restart), and only 100 sessions can exist per account (the oldest session is
dropped).
*/
package webauth

import (
	"context"
	"encoding/json"
	"fmt"
	"log/slog"
	"net"
	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/mjl-/sherpa"

	"github.com/mjl-/mox/metrics"
	"github.com/mjl-/mox/mlog"
	"github.com/mjl-/mox/mox-"
	"github.com/mjl-/mox/store"
)

// Delay before responding in case of bad authentication attempt.
var BadAuthDelay = time.Second

// SessionAuth handles login and session storage, used for both account and
// admin authentication.
type SessionAuth interface {
	login(ctx context.Context, log mlog.Log, username, password string) (valid bool, accountName string, rerr error)

	// Add a new session for account and login address.
	add(ctx context.Context, log mlog.Log, accountName string, loginAddress string) (sessionToken store.SessionToken, csrfToken store.CSRFToken, rerr error)

	// Use an existing session. If csrfToken is empty, no CSRF check must be done.
	// Otherwise the CSRF token must be associated with the session token, as returned
	// by add. If the token is not valid (e.g. expired, unknown, malformed), an error
	// must be returned.
	use(ctx context.Context, log mlog.Log, accountName string, sessionToken store.SessionToken, csrfToken store.CSRFToken) (loginAddress string, rerr error)

	// Removes a session, invalidating any future use. Must return an error if the
	// session is not valid.
	remove(ctx context.Context, log mlog.Log, accountName string, sessionToken store.SessionToken) error
}
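
// As an illustration only: the CSRF semantics of use could be implemented as in
// this minimal in-memory sketch. The memSessionAuth type and its fields are
// hypothetical; real implementations also handle expiry, session limits and
// persistence.
//
//	type memSession struct {
//		csrf         store.CSRFToken
//		loginAddress string
//	}
//
//	type memSessionAuth struct {
//		sessions map[store.SessionToken]memSession
//	}
//
//	func (m *memSessionAuth) use(ctx context.Context, log mlog.Log, accountName string, sessionToken store.SessionToken, csrfToken store.CSRFToken) (string, error) {
//		sess, ok := m.sessions[sessionToken]
//		if !ok {
//			return "", fmt.Errorf("unknown session token")
//		}
//		// An empty csrfToken means the caller skips the CSRF check (e.g. images).
//		if csrfToken != "" && csrfToken != sess.csrf {
//			return "", fmt.Errorf("csrf token does not match session token")
//		}
//		return sess.loginAddress, nil
//	}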

// Check authentication for a request based on session token in cookie and matching
// csrf in case requireCSRF is set (from the x-mox-csrf header, unless postFormCSRF
// is set, in which case the "csrf" form field is used). Also performs rate
// limiting.
//
// If the returned boolean is true, the request is authenticated. If the returned
// boolean is false, an HTTP error response has already been returned. If rate
// limiting applies (after too many failed authentication attempts), an HTTP status
// 429 is returned. Otherwise, for API requests an error object with either code
// "user:noAuth" or "user:badAuth" is returned. Other unauthenticated requests
// result in HTTP status 403.
//
// sessionAuth verifies login attempts and handles session management.
//
// kind is used for the cookie name (webadmin, webaccount, webmail), and for
// logging/metrics.
func Check(ctx context.Context, log mlog.Log, sessionAuth SessionAuth, kind string, isForwarded bool, w http.ResponseWriter, r *http.Request, isAPI, requireCSRF, postFormCSRF bool) (accountName string, sessionToken store.SessionToken, loginAddress string, ok bool) {
	// Respond with an authentication error.
	respondAuthError := func(code, msg string) {
		if isAPI {
			w.Header().Set("Content-Type", "application/json; charset=utf-8")
			var result = struct {
				Error sherpa.Error `json:"error"`
			}{
				sherpa.Error{Code: code, Message: msg},
			}
			json.NewEncoder(w).Encode(result)
		} else {
			http.Error(w, "403 - forbidden - "+msg, http.StatusForbidden)
		}
	}

	// The frontends cannot inject custom headers for all requests, e.g. images loaded
	// as resources. For those, we don't require the CSRF and rely on the session
	// cookie with samesite=strict.
	// todo future: possibly get a session-tied value to use in paths for resources, and verify server-side that it matches the session token.
	var csrfValue string
	if requireCSRF && postFormCSRF {
		csrfValue = r.PostFormValue("csrf")
	} else {
		csrfValue = r.Header.Get("x-mox-csrf")
	}
	csrfToken := store.CSRFToken(csrfValue)
	if requireCSRF && csrfToken == "" {
		respondAuthError("user:noAuth", "missing required csrf header")
		return "", "", "", false
	}

	// Cookies are named "webmailsession", "webaccountsession", "webadminsession".
	cookie, _ := r.Cookie(kind + "session")
	if cookie == nil {
		respondAuthError("user:noAuth", "no session")
		return "", "", "", false
	}

	ip := RemoteIP(log, isForwarded, r)
	if ip == nil {
		respondAuthError("user:noAuth", "cannot find ip for rate limit check (missing x-forwarded-for header?)")
		return "", "", "", false
	}
	start := time.Now()
	if !mox.LimiterFailedAuth.Add(ip, start, 1) {
		metrics.AuthenticationRatelimitedInc(kind)
		http.Error(w, "429 - too many auth attempts", http.StatusTooManyRequests)
		return
	}

	authResult := "badcreds"
	defer func() {
		metrics.AuthenticationInc(kind, "websession", authResult)
	}()

	// Cookie values are of the form: token SP accountname.
	// For admin sessions, the accountname is empty (there is no login address either).
	t := strings.SplitN(cookie.Value, " ", 2)
	if len(t) != 2 {
		time.Sleep(BadAuthDelay)
		respondAuthError("user:badAuth", "malformed session")
		return "", "", "", false
	}
	sessionToken = store.SessionToken(t[0])

	var err error
	accountName, err = url.QueryUnescape(t[1])
	if err != nil {
		time.Sleep(BadAuthDelay)
		respondAuthError("user:badAuth", "malformed session account name")
		return "", "", "", false
	}

	loginAddress, err = sessionAuth.use(ctx, log, accountName, sessionToken, csrfToken)
	if err != nil {
		time.Sleep(BadAuthDelay)
		respondAuthError("user:badAuth", err.Error())
		return "", "", "", false
	}

	mox.LimiterFailedAuth.Reset(ip, start)
	authResult = "ok"

	// Add to HTTP logging that this is an authenticated request.
	if lw, ok := w.(interface{ AddAttr(a slog.Attr) }); ok {
		lw.AddAttr(slog.String("authaccount", accountName))
	}
	return accountName, sessionToken, loginAddress, true
}
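
// A handler would typically invoke Check along these lines (a sketch; the
// actual wiring in the web interfaces differs):
//
//	accName, sessToken, loginAddr, ok := webauth.Check(r.Context(), log, sessionAuth, "webaccount", false, w, r, true, true, false)
//	if !ok {
//		return // Check already wrote the error response.
//	}
//	// Serve the request for accName/loginAddr; sessToken identifies the session.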

// RemoteIP returns the IP address of the client, using the first address in the
// X-Forwarded-For header when the connection comes through a reverse proxy.
func RemoteIP(log mlog.Log, isForwarded bool, r *http.Request) net.IP {
	if isForwarded {
		s := r.Header.Get("X-Forwarded-For")
		ipstr := strings.TrimSpace(strings.Split(s, ",")[0])
		return net.ParseIP(ipstr)
	}

	host, _, _ := net.SplitHostPort(r.RemoteAddr)
	return net.ParseIP(host)
}

// isHTTPS reports whether the request was made over HTTPS, consulting the
// X-Forwarded-Proto header for forwarded connections.
func isHTTPS(isForwarded bool, r *http.Request) bool {
	if isForwarded {
		return r.Header.Get("X-Forwarded-Proto") == "https"
	}
	return r.TLS != nil
}

// LoginPrep is an API call that returns a loginToken and also sets it as cookie
// with the same value. The loginToken must be passed to a subsequent call to
// Login, which will check that the loginToken and cookie are both present and
// match before checking the actual login attempt. This would prevent a third party
// site from triggering login attempts by the browser.
func LoginPrep(ctx context.Context, log mlog.Log, kind, cookiePath string, isForwarded bool, w http.ResponseWriter, r *http.Request, token string) {
	// todo future: we could sign the login token, and verify it on use, so subdomains cannot set it to known values.

	http.SetCookie(w, &http.Cookie{
		Name:     kind + "login",
		Value:    token,
		Path:     cookiePath,
		Secure:   isHTTPS(isForwarded, r),
		HttpOnly: true,
		SameSite: http.SameSiteStrictMode,
		MaxAge:   30, // Only for one login attempt.
	})
}

// Login handles a login attempt, checking against the rate limiter, verifying the
// credentials through sessionAuth, and setting a session token cookie on the HTTP
// response, returning the associated CSRF token.
//
// In case of a user error, a *sherpa.Error is returned that sherpa handlers can
// pass to panic. For bad credentials, the error code is "user:loginFailed".
func Login(ctx context.Context, log mlog.Log, sessionAuth SessionAuth, kind, cookiePath string, isForwarded bool, w http.ResponseWriter, r *http.Request, loginToken, username, password string) (store.CSRFToken, error) {
	loginCookie, _ := r.Cookie(kind + "login")
	if loginCookie == nil || loginCookie.Value != loginToken {
		msg := "missing login token cookie"
		if isForwarded && loginCookie == nil {
			msg += " (hint: reverse proxy must keep path, for login cookie)"
		}
		return "", &sherpa.Error{Code: "user:error", Message: msg}
	}

	ip := RemoteIP(log, isForwarded, r)
	if ip == nil {
		return "", fmt.Errorf("cannot find ip for rate limit check (missing x-forwarded-for header?)")
	}
	start := time.Now()
	if !mox.LimiterFailedAuth.Add(ip, start, 1) {
		metrics.AuthenticationRatelimitedInc(kind)
		return "", &sherpa.Error{Code: "user:error", Message: "too many authentication attempts"}
	}

	valid, accountName, err := sessionAuth.login(ctx, log, username, password)
	var authResult string
	defer func() {
		metrics.AuthenticationInc(kind, "weblogin", authResult)
	}()
	if err != nil {
		authResult = "error"
		return "", fmt.Errorf("evaluating login attempt: %v", err)
	} else if !valid {
		time.Sleep(BadAuthDelay)
		authResult = "badcreds"
		return "", &sherpa.Error{Code: "user:loginFailed", Message: "invalid credentials"}
	}
	authResult = "ok"
	mox.LimiterFailedAuth.Reset(ip, start)

	sessionToken, csrfToken, err := sessionAuth.add(ctx, log, accountName, username)
	if err != nil {
		log.Errorx("adding session after login", err)
		return "", fmt.Errorf("adding session: %v", err)
	}

	// Add session cookie.
	http.SetCookie(w, &http.Cookie{
		Name: kind + "session",
		// Cookie values are ascii only, so we keep the account name query escaped.
		Value:    string(sessionToken) + " " + url.QueryEscape(accountName),
		Path:     cookiePath,
		Secure:   isHTTPS(isForwarded, r),
		HttpOnly: true,
		SameSite: http.SameSiteStrictMode,
		// We don't set a max-age. This makes cookies per-session. Browsers are rarely
		// restarted nowadays, and they have "continue where you left off", keeping session
		// cookies. Our sessions are only valid for max 1 day. Convenience can come from
		// the browser remembering the password.
	})
	// Remove cookie used during login.
	http.SetCookie(w, &http.Cookie{
		Name:     kind + "login",
		Path:     cookiePath,
		Secure:   isHTTPS(isForwarded, r),
		HttpOnly: true,
		SameSite: http.SameSiteStrictMode,
		MaxAge:   -1, // Delete cookie.
	})
	return csrfToken, nil
}
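
// LoginPrep and Login pair up roughly as follows (a sketch; generating the
// login token with crypto/rand is an assumption, any unguessable random value
// works; cryptorand is "crypto/rand", base64 is "encoding/base64"):
//
//	buf := make([]byte, 16)
//	if _, err := cryptorand.Read(buf); err != nil {
//		return err // Random source failure.
//	}
//	loginToken := base64.RawURLEncoding.EncodeToString(buf)
//	webauth.LoginPrep(ctx, log, "webaccount", "/", false, w, r, loginToken)
//	// The frontend sends the returned loginToken back along with credentials:
//	csrfToken, err := webauth.Login(ctx, log, sessionAuth, "webaccount", "/", false, w, r, loginToken, username, password)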

// Logout removes the session token through sessionAuth, and clears the session
// cookie through the HTTP response.
func Logout(ctx context.Context, log mlog.Log, sessionAuth SessionAuth, kind, cookiePath string, isForwarded bool, w http.ResponseWriter, r *http.Request, accountName string, sessionToken store.SessionToken) error {
	err := sessionAuth.remove(ctx, log, accountName, sessionToken)
	if err != nil {
		return fmt.Errorf("removing session: %w", err)
	}

	http.SetCookie(w, &http.Cookie{
		Name:     kind + "session",
		Path:     cookiePath,
		Secure:   isHTTPS(isForwarded, r),
		HttpOnly: true,
		SameSite: http.SameSiteStrictMode,
		MaxAge:   -1, // Delete cookie.
	})
	return nil
}