// NOTE: GENERATED by github.com/mjl-/sherpats, DO NOT MODIFY
namespace api {
export interface Account {
	OutgoingWebhook?: OutgoingWebhook | null
	IncomingWebhook?: IncomingWebhook | null
	FromIDLoginAddresses?: string[] | null
	KeepRetiredMessagePeriod: number
	KeepRetiredWebhookPeriod: number
	Domain: string
	Description: string
	FullName: string
	Destinations?: { [key: string]: Destination }
	SubjectPass: SubjectPass
	QuotaMessageSize: number
	RejectsMailbox: string
	KeepRejects: boolean
	AutomaticJunkFlags: AutomaticJunkFlags
	JunkFilter?: JunkFilter | null // todo: sane defaults for junkfilter
	MaxOutgoingMessagesPerDay: number
	MaxFirstTimeRecipientsPerDay: number
	NoFirstTimeSenderDelay: boolean
	Routes?: Route[] | null
	DNSDomain: Domain // Parsed form of Domain.
	Aliases?: AddressAlias[] | null
}
export interface OutgoingWebhook {
	URL: string
	Authorization: string
	Events?: string[] | null
}
export interface IncomingWebhook {
	URL: string
	Authorization: string
}
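// Illustrative sketch, not part of the generated API: webhook configurations as
// they could be set in Account.OutgoingWebhook and Account.IncomingWebhook. The
// URL and Authorization values are hypothetical placeholders; Authorization, if
// non-empty, is sent as the Authorization header on each webhook call.
export const exampleOutgoingWebhook: OutgoingWebhook = {
	URL: "https://app.example/mox/hook/outgoing",
	Authorization: "Basic bW94OnNlY3JldA==",
	Events: ["delivered", "failed", "suppressed"], // Assumption: an empty/absent list means all events.
}
export const exampleIncomingWebhook: IncomingWebhook = {
	URL: "https://app.example/mox/hook/incoming",
	Authorization: "Basic bW94OnNlY3JldA==",
}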
export interface Destination {
	Mailbox: string
	Rulesets?: Ruleset[] | null
	FullName: string
}
export interface Ruleset {
	SMTPMailFromRegexp: string
	MsgFromRegexp: string
	VerifiedDomain: string
	HeadersRegexp?: { [key: string]: string }
	IsForward: boolean // todo: once we implement ARC, we can use dkim domains that we cannot verify but that the arc-verified forwarding mail server was able to verify.
	ListAllowDomain: string
	AcceptRejectsToMailbox: string
	Mailbox: string
	Comment: string
	VerifiedDNSDomain: Domain
	ListAllowDNSDomain: Domain
}
// Domain is a domain name, with one or more labels, with at least an ASCII
// representation, and for IDNA non-ASCII domains a unicode representation.
// The ASCII string must be used for DNS lookups. The strings do not have a
// trailing dot. When using with StrictResolver, add the trailing dot.
export interface Domain {
	ASCII: string // A non-unicode domain, e.g. with A-labels (xn--...) or NR-LDH (non-reserved letters/digits/hyphens) labels. Always in lower case. No trailing dot.
	Unicode: string // Name as U-labels, in Unicode NFC. Empty if this is an ASCII-only domain. No trailing dot.
}
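// Illustrative example: an IDNA domain in both representations. DNS lookups
// must use the ASCII (punycode) form.
export const exampleIDNADomain: Domain = {
	ASCII: "xn--mnchen-3ya.example", // A-label form of "münchen".
	Unicode: "münchen.example",
}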
export interface SubjectPass {
	Period: number // todo: have a reasonable default for this?
}
export interface AutomaticJunkFlags {
	Enabled: boolean
	JunkMailboxRegexp: string
	NeutralMailboxRegexp: string
	NotJunkMailboxRegexp: string
}
export interface JunkFilter {
	Threshold: number
	Onegrams: boolean
	Twograms: boolean
	Threegrams: boolean
	MaxPower: number
	TopWords: number
	IgnoreWords: number
	RareWords: number
}
export interface Route {
	FromDomain?: string[] | null
	ToDomain?: string[] | null
	MinimumAttempts: number
	Transport: string
	FromDomainASCII?: string[] | null
	ToDomainASCII?: string[] | null
}
export interface AddressAlias {
	SubscriptionAddress: string
	Alias: Alias // Without members.
	MemberAddresses?: string[] | null // Only if allowed to see.
}
export interface Alias {
	Addresses?: string[] | null
	PostPublic: boolean
	ListMembers: boolean
	AllowMsgFrom: boolean
	LocalpartStr: string // In encoded form.
	Domain: Domain
	ParsedAddresses?: AliasAddress[] | null // Matches addresses.
}
export interface AliasAddress {
	Address: Address // Parsed address.
	AccountName: string // Looked up.
	Destination: Destination // Belonging to address.
}
// Address is a parsed email address.
export interface Address {
	Localpart: Localpart
	Domain: Domain // todo: shouldn't we accept an ip address here too? and merge this type into smtp.Path.
}
// Suppression is an address to which messages will not be delivered. Attempts to
// deliver or queue will result in an immediate permanent failure to deliver.
export interface Suppression {
	ID: number
	Created: Date
	Account: string // Suppression applies to this account only.
	BaseAddress: string // Unicode. Address with fictional simplified localpart: lowercase, dots removed (gmail), first token before any "-" or "+" (typical catchall separator).
	OriginalAddress: string // Unicode. Address that caused this suppression.
	Manual: boolean
	Reason: string
}
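// Illustrative example of an automatically added suppression: BaseAddress is
// derived from OriginalAddress by lowercasing, dropping dots from the localpart
// and cutting at the first "-" or "+". The account name and reason are
// hypothetical.
export const exampleSuppression: Suppression = {
	ID: 1,
	Created: new Date("2024-04-15T12:00:00Z"),
	Account: "app",
	BaseAddress: "johndoe@example.org",
	OriginalAddress: "John.Doe+news@example.org",
	Manual: false, // Added by the queue after a permanent delivery failure.
	Reason: "smtp error from remote: 550 5.1.1 no such user", // Hypothetical.
}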
// ImportProgress is returned after uploading a file to import.
export interface ImportProgress {
	Token: string // For fetching progress, or cancelling an import.
}
// Outgoing is the payload sent to webhook URLs for events about outgoing deliveries.
export interface Outgoing {
	Version: number // Format of hook, currently 0.
	Event: OutgoingEvent // Type of outgoing delivery event.
	DSN: boolean // If this event was triggered by a delivery status notification message (DSN).
	Suppressing: boolean // If true, this failure caused the address to be added to the suppression list.
	QueueMsgID: number // ID of message in queue.
	FromID: string // As used in MAIL FROM, can be empty, for incoming messages.
	MessageID: string // From Message-Id header, as set by submitter or us, with enclosing <>.
	Subject: string // Of original message.
	WebhookQueued: Date // When webhook was first queued for delivery.
	SMTPCode: number // Optional, for errors only, e.g. 451, 550. See package smtp for definitions.
	SMTPEnhancedCode: string // Optional, for errors only, e.g. 5.1.1.
	Error: string // Error message while delivering, or from DSN from remote, if any.
	Extra?: { [key: string]: string } // Extra fields set for message during submit, through webapi call or through X-Mox-Extra-* headers during SMTP submission.
}
// Incoming is the data sent to a webhook for incoming deliveries over SMTP.
export interface Incoming {
	Version: number // Format of hook, currently 0.
	From?: NameAddress[] | null // Message "From" header, typically has one address.
	To?: NameAddress[] | null
	CC?: NameAddress[] | null
	BCC?: NameAddress[] | null // Often empty, even if you were a BCC recipient.
	ReplyTo?: NameAddress[] | null // Optional Reply-To header, typically absent or with one address.
	Subject: string
	MessageID: string // Of Message-Id header, typically of the form "<random@hostname>", includes <>.
	InReplyTo: string // Optional, the message-id this message is a reply to. Includes <>.
	References?: string[] | null // Optional, zero or more message-ids this message is a reply/forward/related to. The last entry is the most recent/immediate message this is a reply to. Earlier entries are the parents in a thread. Values include <>.
	Date?: Date | null // Time in "Date" message header, can be different from time received.
	Text: string // Contents of text/plain and/or text/html part (if any), with "\n" line-endings, converted from "\r\n". Values are truncated to 1MB (1024*1024 bytes). Use webapi MessagePartGet to retrieve the full part data.
	HTML: string
	Structure: Structure // Parsed form of MIME message.
	Meta: IncomingMeta // Details about message in storage, and SMTP transaction details.
}
export interface NameAddress {
	Name: string // Optional, human-readable "display name" of the addressee.
	Address: string // Required, email address.
}
export interface Structure {
	ContentType: string // Lower case, e.g. text/plain.
	ContentTypeParams?: { [key: string]: string } // Lower case keys, original case values, e.g. {"charset": "UTF-8"}.
	ContentID: string // Can be empty. Otherwise, should be a value wrapped in <>'s. For use in HTML, referenced as URI `cid:...`.
	DecodedSize: number // Size of content after decoding content-transfer-encoding. For text and HTML parts, this can be larger than the data returned since this size includes \r\n line endings.
	Parts?: Structure[] | null // Subparts of a multipart message, possibly recursive.
}
export interface IncomingMeta {
	MsgID: number // ID of message in storage, and to use in webapi calls like MessageGet.
	MailFrom: string // Address used during SMTP "MAIL FROM" command.
	MailFromValidated: boolean // Whether SMTP MAIL FROM address was SPF-validated.
	MsgFromValidated: boolean // Whether address in message "From"-header was DMARC(-like) validated.
	RcptTo: string // SMTP RCPT TO address used in SMTP.
	DKIMVerifiedDomains?: string[] | null // Verified domains from DKIM-signature in message. Can be different domain than used in addresses.
	RemoteIP: string // Where the message was delivered from.
	Received: Date // When message was received, may be different from the Date header.
	MailboxName: string // Mailbox where message was delivered to, based on configured rules. Defaults to "Inbox".
	Automated: boolean // Whether this message was automated and should not receive automated replies. E.g. out of office or mailing list messages.
}
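// Illustrative sketch, not generated code: depth-first walk of the parsed MIME
// structure of an Incoming webhook payload, returning the first part with the
// given content-type (lower case), or null. E.g.
// findPart(incoming.Structure, "text/calendar").
export function findPart(s: Structure, contentType: string): Structure | null {
	if (s.ContentType === contentType) {
		return s
	}
	for (const sub of s.Parts || []) {
		const match = findPart(sub, contentType)
		if (match) {
			return match
		}
	}
	return null
}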
export type CSRFToken = string
// Localpart is a decoded local part of an email address, before the "@".
// For quoted strings, values do not hold the double quote or escaping backslashes.
// An empty string can be a valid localpart.
// Localparts are in Unicode NFC.
export type Localpart = string
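// Illustrative example: the quoted address `"john doe"@example.org` has
// localpart "john doe", stored without the double quotes or escaping backslashes.
export const exampleLocalpart: Localpart = "john doe"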
// OutgoingEvent is an activity for an outgoing delivery. Either generated by the
// queue, or through an incoming DSN (delivery status notification) message.
export enum OutgoingEvent {
	// Message was accepted by a next-hop server. This does not necessarily mean the
	// message has been delivered in the mailbox of the user.
	EventDelivered = "delivered",
	// Outbound delivery was suppressed because the recipient address is on the
	// suppression list of the account, or a simplified/base variant of the address is.
	EventSuppressed = "suppressed",
	EventDelayed = "delayed", // A delivery attempt failed but delivery will be retried again later.
	// Delivery of the message failed and will not be tried again. Also see the
	// "Suppressing" field of [Outgoing].
	EventFailed = "failed",
	// Message was relayed into a system that does not generate DSNs. Should only
	// happen when explicitly requested.
	EventRelayed = "relayed",
	// Message was accepted and is being delivered to multiple recipients (e.g. the
	// address was an alias/list), which may generate more DSNs.
	EventExpanded = "expanded",
	EventCanceled = "canceled", // Message was removed from the queue, e.g. canceled by admin/user.
	// An incoming message was received that was either a DSN with an unknown event
	// type ("action"), or an incoming non-DSN-message was received for the unique
	// per-outgoing-message address used for sending.
	EventUnrecognized = "unrecognized",
}
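// Illustrative sketch, not generated code: dispatching on the event of an
// Outgoing webhook payload. A real receiver would first verify the request's
// Authorization header against the configured OutgoingWebhook.Authorization.
export function handleOutgoingEvent(out: Outgoing): void {
	switch (out.Event) {
	case OutgoingEvent.EventDelivered:
		// Accepted by the next-hop server, not necessarily final delivery.
		break
	case OutgoingEvent.EventFailed:
	case OutgoingEvent.EventSuppressed:
		// Permanent failure. If out.Suppressing is true, the recipient address
		// was just added to the account's suppression list.
		break
	default:
		// Delayed, relayed, expanded, canceled or unrecognized: typically logged.
		break
	}
}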
export const structTypes: { [typename: string]: boolean } = { "Account": true, "Address": true, "AddressAlias": true, "Alias": true, "AliasAddress": true, "AutomaticJunkFlags": true, "Destination": true, "Domain": true, "ImportProgress": true, "Incoming": true, "IncomingMeta": true, "IncomingWebhook": true, "JunkFilter": true, "NameAddress": true, "Outgoing": true, "OutgoingWebhook": true, "Route": true, "Ruleset": true, "Structure": true, "SubjectPass": true, "Suppression": true }
export const stringsTypes: { [typename: string]: boolean } = { "CSRFToken": true, "Localpart": true, "OutgoingEvent": true }
export const intsTypes: { [typename: string]: boolean } = {}
export const types: TypenameMap = {
"Account" : { "Name" : "Account" , "Docs" : "" , "Fields" : [ { "Name" : "OutgoingWebhook" , "Docs" : "" , "Typewords" : [ "nullable" , "OutgoingWebhook" ] } , { "Name" : "IncomingWebhook" , "Docs" : "" , "Typewords" : [ "nullable" , "IncomingWebhook" ] } , { "Name" : "FromIDLoginAddresses" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "KeepRetiredMessagePeriod" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "KeepRetiredWebhookPeriod" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "Domain" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Description" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "FullName" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Destinations" , "Docs" : "" , "Typewords" : [ "{}" , "Destination" ] } , { "Name" : "SubjectPass" , "Docs" : "" , "Typewords" : [ "SubjectPass" ] } , { "Name" : "QuotaMessageSize" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "RejectsMailbox" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "KeepRejects" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "AutomaticJunkFlags" , "Docs" : "" , "Typewords" : [ "AutomaticJunkFlags" ] } , { "Name" : "JunkFilter" , "Docs" : "" , "Typewords" : [ "nullable" , "JunkFilter" ] } , { "Name" : "MaxOutgoingMessagesPerDay" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "MaxFirstTimeRecipientsPerDay" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "NoFirstTimeSenderDelay" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "Routes" , "Docs" : "" , "Typewords" : [ "[]" , "Route" ] } , { "Name" : "DNSDomain" , "Docs" : "" , "Typewords" : [ "Domain" ] } , { "Name" : "Aliases" , "Docs" : "" , "Typewords" : [ "[]" , "AddressAlias" ] } ] } ,
"OutgoingWebhook" : { "Name" : "OutgoingWebhook" , "Docs" : "" , "Fields" : [ { "Name" : "URL" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Authorization" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Events" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } ] } ,
"IncomingWebhook" : { "Name" : "IncomingWebhook" , "Docs" : "" , "Fields" : [ { "Name" : "URL" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Authorization" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"Destination" : { "Name" : "Destination" , "Docs" : "" , "Fields" : [ { "Name" : "Mailbox" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Rulesets" , "Docs" : "" , "Typewords" : [ "[]" , "Ruleset" ] } , { "Name" : "FullName" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"Ruleset" : { "Name" : "Ruleset" , "Docs" : "" , "Fields" : [ { "Name" : "SMTPMailFromRegexp" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "MsgFromRegexp" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "VerifiedDomain" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "HeadersRegexp" , "Docs" : "" , "Typewords" : [ "{}" , "string" ] } , { "Name" : "IsForward" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "ListAllowDomain" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "AcceptRejectsToMailbox" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Mailbox" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Comment" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "VerifiedDNSDomain" , "Docs" : "" , "Typewords" : [ "Domain" ] } , { "Name" : "ListAllowDNSDomain" , "Docs" : "" , "Typewords" : [ "Domain" ] } ] } ,
"Domain" : { "Name" : "Domain" , "Docs" : "" , "Fields" : [ { "Name" : "ASCII" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Unicode" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"SubjectPass" : { "Name" : "SubjectPass" , "Docs" : "" , "Fields" : [ { "Name" : "Period" , "Docs" : "" , "Typewords" : [ "int64" ] } ] } ,
"AutomaticJunkFlags" : { "Name" : "AutomaticJunkFlags" , "Docs" : "" , "Fields" : [ { "Name" : "Enabled" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "JunkMailboxRegexp" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "NeutralMailboxRegexp" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "NotJunkMailboxRegexp" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"JunkFilter" : { "Name" : "JunkFilter" , "Docs" : "" , "Fields" : [ { "Name" : "Threshold" , "Docs" : "" , "Typewords" : [ "float64" ] } , { "Name" : "Onegrams" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "Twograms" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "Threegrams" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "MaxPower" , "Docs" : "" , "Typewords" : [ "float64" ] } , { "Name" : "TopWords" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "IgnoreWords" , "Docs" : "" , "Typewords" : [ "float64" ] } , { "Name" : "RareWords" , "Docs" : "" , "Typewords" : [ "int32" ] } ] } ,
"Route" : { "Name" : "Route" , "Docs" : "" , "Fields" : [ { "Name" : "FromDomain" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "ToDomain" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "MinimumAttempts" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "Transport" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "FromDomainASCII" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "ToDomainASCII" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } ] } ,
"AddressAlias" : { "Name" : "AddressAlias" , "Docs" : "" , "Fields" : [ { "Name" : "SubscriptionAddress" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Alias" , "Docs" : "" , "Typewords" : [ "Alias" ] } , { "Name" : "MemberAddresses" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } ] } ,
"Alias" : { "Name" : "Alias" , "Docs" : "" , "Fields" : [ { "Name" : "Addresses" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "PostPublic" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "ListMembers" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "AllowMsgFrom" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "LocalpartStr" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Domain" , "Docs" : "" , "Typewords" : [ "Domain" ] } , { "Name" : "ParsedAddresses" , "Docs" : "" , "Typewords" : [ "[]" , "AliasAddress" ] } ] } ,
"AliasAddress" : { "Name" : "AliasAddress" , "Docs" : "" , "Fields" : [ { "Name" : "Address" , "Docs" : "" , "Typewords" : [ "Address" ] } , { "Name" : "AccountName" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Destination" , "Docs" : "" , "Typewords" : [ "Destination" ] } ] } ,
"Address" : { "Name" : "Address" , "Docs" : "" , "Fields" : [ { "Name" : "Localpart" , "Docs" : "" , "Typewords" : [ "Localpart" ] } , { "Name" : "Domain" , "Docs" : "" , "Typewords" : [ "Domain" ] } ] } ,
"Suppression" : { "Name" : "Suppression" , "Docs" : "" , "Fields" : [ { "Name" : "ID" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "Created" , "Docs" : "" , "Typewords" : [ "timestamp" ] } , { "Name" : "Account" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "BaseAddress" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "OriginalAddress" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Manual" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "Reason" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"ImportProgress" : { "Name" : "ImportProgress" , "Docs" : "" , "Fields" : [ { "Name" : "Token" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"Outgoing" : { "Name" : "Outgoing" , "Docs" : "" , "Fields" : [ { "Name" : "Version" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "Event" , "Docs" : "" , "Typewords" : [ "OutgoingEvent" ] } , { "Name" : "DSN" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "Suppressing" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "QueueMsgID" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "FromID" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "MessageID" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Subject" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "WebhookQueued" , "Docs" : "" , "Typewords" : [ "timestamp" ] } , { "Name" : "SMTPCode" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "SMTPEnhancedCode" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Error" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Extra" , "Docs" : "" , "Typewords" : [ "{}" , "string" ] } ] } ,
"Incoming" : { "Name" : "Incoming" , "Docs" : "" , "Fields" : [ { "Name" : "Version" , "Docs" : "" , "Typewords" : [ "int32" ] } , { "Name" : "From" , "Docs" : "" , "Typewords" : [ "[]" , "NameAddress" ] } , { "Name" : "To" , "Docs" : "" , "Typewords" : [ "[]" , "NameAddress" ] } , { "Name" : "CC" , "Docs" : "" , "Typewords" : [ "[]" , "NameAddress" ] } , { "Name" : "BCC" , "Docs" : "" , "Typewords" : [ "[]" , "NameAddress" ] } , { "Name" : "ReplyTo" , "Docs" : "" , "Typewords" : [ "[]" , "NameAddress" ] } , { "Name" : "Subject" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "MessageID" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "InReplyTo" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "References" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "Date" , "Docs" : "" , "Typewords" : [ "nullable" , "timestamp" ] } , { "Name" : "Text" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "HTML" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Structure" , "Docs" : "" , "Typewords" : [ "Structure" ] } , { "Name" : "Meta" , "Docs" : "" , "Typewords" : [ "IncomingMeta" ] } ] } ,
"NameAddress" : { "Name" : "NameAddress" , "Docs" : "" , "Fields" : [ { "Name" : "Name" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Address" , "Docs" : "" , "Typewords" : [ "string" ] } ] } ,
"Structure" : { "Name" : "Structure" , "Docs" : "" , "Fields" : [ { "Name" : "ContentType" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "ContentTypeParams" , "Docs" : "" , "Typewords" : [ "{}" , "string" ] } , { "Name" : "ContentID" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "DecodedSize" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "Parts" , "Docs" : "" , "Typewords" : [ "[]" , "Structure" ] } ] } ,
"IncomingMeta" : { "Name" : "IncomingMeta" , "Docs" : "" , "Fields" : [ { "Name" : "MsgID" , "Docs" : "" , "Typewords" : [ "int64" ] } , { "Name" : "MailFrom" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "MailFromValidated" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "MsgFromValidated" , "Docs" : "" , "Typewords" : [ "bool" ] } , { "Name" : "RcptTo" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "DKIMVerifiedDomains" , "Docs" : "" , "Typewords" : [ "[]" , "string" ] } , { "Name" : "RemoteIP" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Received" , "Docs" : "" , "Typewords" : [ "timestamp" ] } , { "Name" : "MailboxName" , "Docs" : "" , "Typewords" : [ "string" ] } , { "Name" : "Automated" , "Docs" : "" , "Typewords" : [ "bool" ] } ] } ,
replace http basic auth for web interfaces with session cookie & csrf-based auth
the http basic auth we had was very simple to reason about and to implement,
but it has a major downside: there is no way to log out; browsers keep sending
credentials. ideally, browsers themselves would show a button to stop sending
credentials. a related downside: the http auth mechanism doesn't indicate
which server paths the credentials apply to. another downside: the original
password is sent to the server with each request, though sending original
passwords to web servers seems to be considered normal.
our new approach uses session cookies, along with csrf values where we can.
sessions are managed server-side and automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (compared
to long- vs short-term sessions with refreshing). the cookies are httponly,
samesite=strict, and scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random and associated with the session cookie.
it must be sent as a header for api calls, or as a parameter for direct form
posts (where we cannot set a custom header). rest-like calls made directly by
the browser, e.g. for images, don't have csrf protection. the csrf value is
returned by the Login api call and stored in localstorage.
api calls without credentials return code "user:noAuth", and calls with bad
credentials return "user:badAuth". the api client recognizes these and
triggers a login. after a login, all auth-failed api calls are automatically
retried. only for "user:badAuth" is an error message displayed in the login
form (e.g. session expired).
in an ideal world, browsers would take care of most session management: a
server would indicate that authentication is needed (like http basic auth),
and the browser would use trusted ui to request credentials for the server &
path. the browser could use a safer mechanism than sending original passwords
to the server, such as scram, along with a standard way to create sessions.
for now, web developers have to do authentication themselves: from showing the
login prompt to ensuring the right session/csrf
cookies/localstorage/headers/etc are sent with each request.
webauthn is a newer way to do authentication; perhaps we'll implement it in
the future. though hardware tokens aren't an attractive option for many users,
and it may be overkill as long as we still do old-fashioned authentication in
smtp & imap, where passwords can be sent to the server.
for issue #58
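// a minimal sketch of the call flow described above; the header name,
// endpoint path, localStorage key and response shape are assumptions for
// illustration, not the exact ones this client uses:
//
//	async function call(fn: string, params: any[]): Promise<any> {
//		const csrf = window.localStorage.getItem("csrftoken") || ""
//		const resp = await fetch("api/" + fn, {
//			method: "POST",
//			headers: { "x-csrf": csrf }, // csrf sent as a header on api calls
//			body: JSON.stringify({ params: params }),
//		})
//		const r = await resp.json()
//		if (r.error && (r.error.code === "user:noAuth" || r.error.code === "user:badAuth")) {
//			// show the login form; after a successful Login, retry the call.
//			throw new Error("login required: " + r.error.code)
//		}
//		return r.result
//	}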
"CSRFToken" : { "Name" : "CSRFToken" , "Docs" : "" , "Values" : null } ,
"Localpart" : { "Name" : "Localpart" , "Docs" : "" , "Values" : null } ,
"OutgoingEvent" : { "Name" : "OutgoingEvent" , "Docs" : "" , "Values" : [ { "Name" : "EventDelivered" , "Value" : "delivered" , "Docs" : "" } , { "Name" : "EventSuppressed" , "Value" : "suppressed" , "Docs" : "" } , { "Name" : "EventDelayed" , "Value" : "delayed" , "Docs" : "" } , { "Name" : "EventFailed" , "Value" : "failed" , "Docs" : "" } , { "Name" : "EventRelayed" , "Value" : "relayed" , "Docs" : "" } , { "Name" : "EventExpanded" , "Value" : "expanded" , "Docs" : "" } , { "Name" : "EventCanceled" , "Value" : "canceled" , "Docs" : "" } , { "Name" : "EventUnrecognized" , "Value" : "unrecognized" , "Docs" : "" } ] } ,
}
export const parser = {
Account: (v: any) => parse("Account", v) as Account,
OutgoingWebhook: (v: any) => parse("OutgoingWebhook", v) as OutgoingWebhook,
IncomingWebhook: (v: any) => parse("IncomingWebhook", v) as IncomingWebhook,
Destination: (v: any) => parse("Destination", v) as Destination,
Ruleset: (v: any) => parse("Ruleset", v) as Ruleset,
Domain: (v: any) => parse("Domain", v) as Domain,
SubjectPass: (v: any) => parse("SubjectPass", v) as SubjectPass,
AutomaticJunkFlags: (v: any) => parse("AutomaticJunkFlags", v) as AutomaticJunkFlags,
JunkFilter: (v: any) => parse("JunkFilter", v) as JunkFilter,
Route: (v: any) => parse("Route", v) as Route,
AddressAlias: (v: any) => parse("AddressAlias", v) as AddressAlias,
Alias: (v: any) => parse("Alias", v) as Alias,
AliasAddress: (v: any) => parse("AliasAddress", v) as AliasAddress,
Address: (v: any) => parse("Address", v) as Address,
Suppression: (v: any) => parse("Suppression", v) as Suppression,
ImportProgress: (v: any) => parse("ImportProgress", v) as ImportProgress,
Outgoing: (v: any) => parse("Outgoing", v) as Outgoing,
Incoming: (v: any) => parse("Incoming", v) as Incoming,
NameAddress: (v: any) => parse("NameAddress", v) as NameAddress,
Structure: (v: any) => parse("Structure", v) as Structure,
IncomingMeta: (v: any) => parse("IncomingMeta", v) as IncomingMeta,
CSRFToken: (v: any) => parse("CSRFToken", v) as CSRFToken,
Localpart: (v: any) => parse("Localpart", v) as Localpart,
OutgoingEvent: (v: any) => parse("OutgoingEvent", v) as OutgoingEvent,
}
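// usage sketch: convert a received webhook JSON body into a typed value. the
// parse calls check the value against the type tables above and revive
// timestamp strings; the variable names here are illustrative.
//
//	const raw = JSON.parse(webhookBody) // e.g. {"Version": 0, "Event": "delivered", ...}
//	const outgoing = parser.Outgoing(raw)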
// Account exports web API functions for the account web interface. All its
// methods are exported under api/. Function calls require valid HTTP
// Authentication credentials of a user.
let defaultOptions: ClientOptions = { slicesNullable: true, mapsNullable: true, nullableOptional: true }
export class Client {
private baseURL: string
public authState: AuthState
public options: ClientOptions
constructor() {
this.authState = {}
this.options = { ...defaultOptions }
this.baseURL = this.options.baseURL || defaultBaseURL
}
withAuthToken(token: string): Client {
const c = new Client()
c.authState.token = token
c.options = this.options
return c
}
withOptions(options: ClientOptions): Client {
const c = new Client()
c.authState = this.authState
c.options = { ...this.options, ...options }
return c
}
// LoginPrep returns a login token, and also sets it as cookie. Both must be
// present in the call to Login.
async LoginPrep(): Promise<string> {
const fn: string = "LoginPrep"
const paramTypes: string[][] = []
const returnTypes: string[][] = [["string"]]
const params: any[] = []
return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as string
}
// Login returns a session token for the credentials, or fails with error code
// "user:badLogin". Call LoginPrep to get a loginToken.
async Login(loginToken: string, username: string, password: string): Promise<CSRFToken> {
const fn: string = "Login"
const paramTypes: string[][] = [["string"], ["string"], ["string"]]
const returnTypes: string[][] = [["CSRFToken"]]
const params: any[] = [loginToken, username, password]
return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as CSRFToken
}
// Logout invalidates the session token.
async Logout(): Promise<void> {
const fn: string = "Logout"
const paramTypes: string[][] = []
const returnTypes: string[][] = []
const params: any[] = []
return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
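// usage sketch for the session api above; the address, password and
// localStorage key are placeholders:
//
//	const client = new Client()
//	const loginToken = await client.LoginPrep() // also sets the login token cookie
//	const csrf = await client.Login(loginToken, "user@example.org", "password")
//	window.localStorage.setItem("csrftoken", csrf) // kept for the csrf header on later calls
//	await client.Logout() // invalidates the session again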
// SetPassword saves a new password for the account, invalidating the previous password.
// Sessions are not interrupted, and will keep working. New login attempts must use the new password.
// Password must be at least 8 characters.
async SetPassword(password: string): Promise<void> {
const fn: string = "SetPassword"
const paramTypes: string[][] = [["string"]]
const returnTypes: string[][] = []
const params: any[] = [password]
return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
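// usage sketch (placeholder password; must be at least 8 characters):
//
//	await client.SetPassword("correct horse battery staple")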
// Account returns information about the account.
// StorageUsed is the sum of the sizes of all messages, in bytes.
// StorageLimit is the maximum storage that can be used, or 0 if there is no limit.
async Account(): Promise<[Account, number, number, Suppression[] | null]> {
const fn: string = "Account"
const paramTypes: string[][] = []
const returnTypes: string[][] = [["Account"], ["int64"], ["int64"], ["[]", "Suppression"]]
const params: any[] = []
return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [Account, number, number, Suppression[] | null]
}
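// usage sketch: storage used and storage limit are in bytes; the suppression
// list may be null.
//
//	const [acc, storageUsed, storageLimit, suppressions] = await client.Account()
//	console.log(storageUsed, "of", storageLimit || "unlimited", "bytes,", (suppressions || []).length, "suppressions")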
// AccountSaveFullName saves the full name (used as display name in email messages)
// for the account.
async AccountSaveFullName(fullName: string): Promise<void> {
const fn: string = "AccountSaveFullName"
const paramTypes: string[][] = [["string"]]
const returnTypes: string[][] = []
const params: any[] = [fullName]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// DestinationSave updates a destination.
// OldDest is compared against the current destination. If it does not match, an
// error is returned. Otherwise newDest is saved and the configuration reloaded.
	async DestinationSave(destName: string, oldDest: Destination, newDest: Destination): Promise<void> {
		const fn: string = "DestinationSave"
		const paramTypes: string[][] = [["string"], ["Destination"], ["Destination"]]
		const returnTypes: string[][] = []
		const params: any[] = [destName, oldDest, newDest]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// ImportAbort aborts an import that is in progress. If the import exists and isn't
// finished, no changes will have been made by the import.
	async ImportAbort(importToken: string): Promise<void> {
		const fn: string = "ImportAbort"
		const paramTypes: string[][] = [["string"]]
		const returnTypes: string[][] = []
		const params: any[] = [importToken]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// Types exposes types not used in API method signatures, such as the import form upload.
	async Types(): Promise<ImportProgress> {
		const fn: string = "Types"
		const paramTypes: string[][] = []
		const returnTypes: string[][] = [["ImportProgress"]]
		const params: any[] = []
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as ImportProgress
}
// SuppressionList lists the addresses on the suppression list of this account.
	async SuppressionList(): Promise<Suppression[] | null> {
		const fn: string = "SuppressionList"
		const paramTypes: string[][] = []
		const returnTypes: string[][] = [["[]", "Suppression"]]
		const params: any[] = []
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as Suppression[] | null
}
// SuppressionAdd adds an email address to the suppression list.
	async SuppressionAdd(address: string, manual: boolean, reason: string): Promise<Suppression> {
		const fn: string = "SuppressionAdd"
		const paramTypes: string[][] = [["string"], ["bool"], ["string"]]
		const returnTypes: string[][] = [["Suppression"]]
		const params: any[] = [address, manual, reason]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as Suppression
}
// SuppressionRemove removes the email address from the suppression list.
	async SuppressionRemove(address: string): Promise<void> {
		const fn: string = "SuppressionRemove"
		const paramTypes: string[][] = [["string"]]
		const returnTypes: string[][] = []
		const params: any[] = [address]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
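	// Example (illustrative sketch, not part of the generated API): managing the
	// suppression list from a frontend, assuming "client" is an instance of this
	// client class and the address is a placeholder:
	//
	//	const entries = await client.SuppressionList() || []
	//	console.log('currently suppressed:', entries.length)
	//	await client.SuppressionAdd('user@example.org', true, 'manually blocked')
	//	await client.SuppressionRemove('user@example.org')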
	// OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url
	// is empty, the webhook is disabled. If authorization is non-empty, it is used
	// for the Authorization header in HTTP requests. Events specifies the outgoing
	// events to be delivered, or all if empty/nil.
	async OutgoingWebhookSave(url: string, authorization: string, events: string[] | null): Promise<void> {
		const fn: string = "OutgoingWebhookSave"
		const paramTypes: string[][] = [["string"], ["string"], ["[]", "string"]]
		const returnTypes: string[][] = []
		const params: any[] = [url, authorization, events]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
	// OutgoingWebhookTest makes a test webhook call to urlStr, with optional
	// authorization. If the HTTP request is made, this call will succeed even for
	// non-2xx HTTP status codes.
	async OutgoingWebhookTest(urlStr: string, authorization: string, data: Outgoing): Promise<[number, string, string]> {
		const fn: string = "OutgoingWebhookTest"
		const paramTypes: string[][] = [["string"], ["string"], ["Outgoing"]]
		const returnTypes: string[][] = [["int32"], ["string"], ["string"]]
		const params: any[] = [urlStr, authorization, data]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [number, string, string]
}
// IncomingWebhookSave saves a new webhook url for incoming deliveries. If url is
// empty, the webhook is disabled. If authorization is not empty, it is used in
// the Authorization header in requests.
	async IncomingWebhookSave(url: string, authorization: string): Promise<void> {
		const fn: string = "IncomingWebhookSave"
		const paramTypes: string[][] = [["string"], ["string"]]
		const returnTypes: string[][] = []
		const params: any[] = [url, authorization]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr,
// with optional authorization header. If the HTTP call is made, this function
// returns non-error regardless of HTTP status code.
	async IncomingWebhookTest(urlStr: string, authorization: string, data: Incoming): Promise<[number, string, string]> {
		const fn: string = "IncomingWebhookTest"
		const paramTypes: string[][] = [["string"], ["string"], ["Incoming"]]
		const returnTypes: string[][] = [["int32"], ["string"], ["string"]]
		const params: any[] = [urlStr, authorization, data]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as [number, string, string]
}
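	// Example (illustrative sketch): configuring and testing webhooks, assuming
	// "client" is an instance of this client class; the URL, authorization value
	// and event names are placeholders:
	//
	//	await client.OutgoingWebhookSave('https://app.example/mox/outgoing', 'Bearer <token>', ['delivered', 'failed'])
	//	// An empty URL disables a webhook again:
	//	await client.IncomingWebhookSave('', '')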
// FromIDLoginAddressesSave saves new login addresses to enable unique SMTP
// MAIL FROM addresses ("fromid") for deliveries from the queue.
	async FromIDLoginAddressesSave(loginAddresses: string[] | null): Promise<void> {
		const fn: string = "FromIDLoginAddressesSave"
		const paramTypes: string[][] = [["[]", "string"]]
		const returnTypes: string[][] = []
		const params: any[] = [loginAddresses]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
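	// Example (illustrative sketch): enable unique MAIL FROM addresses for
	// messages submitted with this (placeholder) login address; a null or empty
	// list turns the feature off again:
	//
	//	await client.FromIDLoginAddressesSave(['newsletter@example.org'])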
	// KeepRetiredPeriodsSave saves the retention periods for retired messages and webhooks.
	async KeepRetiredPeriodsSave(keepRetiredMessagePeriod: number, keepRetiredWebhookPeriod: number): Promise<void> {
		const fn: string = "KeepRetiredPeriodsSave"
		const paramTypes: string[][] = [["int64"], ["int64"]]
		const returnTypes: string[][] = []
		const params: any[] = [keepRetiredMessagePeriod, keepRetiredWebhookPeriod]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// AutomaticJunkFlagsSave saves settings for automatically marking messages as
// junk/nonjunk when moved to mailboxes matching certain regular expressions.
	async AutomaticJunkFlagsSave(enabled: boolean, junkRegexp: string, neutralRegexp: string, notJunkRegexp: string): Promise<void> {
		const fn: string = "AutomaticJunkFlagsSave"
		const paramTypes: string[][] = [["bool"], ["string"], ["string"], ["string"]]
		const returnTypes: string[][] = []
		const params: any[] = [enabled, junkRegexp, neutralRegexp, notJunkRegexp]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// JunkFilterSave saves junk filter settings. If junkFilter is nil, the junk filter
// is disabled. Otherwise all fields except Threegrams are stored.
	async JunkFilterSave(junkFilter: JunkFilter | null): Promise<void> {
		const fn: string = "JunkFilterSave"
		const paramTypes: string[][] = [["nullable", "JunkFilter"]]
		const returnTypes: string[][] = []
		const params: any[] = [junkFilter]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
// RejectsSave saves the RejectsMailbox and KeepRejects settings.
	async RejectsSave(mailbox: string, keep: boolean): Promise<void> {
		const fn: string = "RejectsSave"
		const paramTypes: string[][] = [["string"], ["bool"]]
		const returnTypes: string[][] = []
		const params: any[] = [mailbox, keep]
		return await _sherpaCall(this.baseURL, this.authState, { ...this.options }, paramTypes, returnTypes, fn, params) as void
}
}
export const defaultBaseURL = (function () {
	let p = location.pathname
	if (p && p[p.length - 1] !== '/') {
		let l = location.pathname.split('/')
		l = l.slice(0, l.length - 1)
		p = '/' + l.join('/') + '/'
	}
	return location.protocol + '//' + location.host + p + 'api/'
})()
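// For example (illustrative): when this interface is served at
// https://mail.example/account/, defaultBaseURL evaluates to
// "https://mail.example/account/api/", the endpoint the methods above POST to.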
// NOTE: code below is shared between github.com/mjl-/sherpaweb and github.com/mjl-/sherpats.
// KEEP IN SYNC.
export const supportedSherpaVersion = 1

export interface Section {
	Name: string
	Docs: string
	Functions: Function[]
	Sections: Section[]
	Structs: Struct[]
	Ints: Ints[]
	Strings: Strings[]
	Version: string // only for top-level section
	SherpaVersion: number // only for top-level section
	SherpadocVersion: number // only for top-level section
}

export interface Function {
	Name: string
	Docs: string
	Params: Arg[]
	Returns: Arg[]
}

export interface Arg {
	Name: string
	Typewords: string[]
}

export interface Struct {
	Name: string
	Docs: string
	Fields: Field[]
}

export interface Field {
	Name: string
	Docs: string
	Typewords: string[]
}

export interface Ints {
	Name: string
	Docs: string
	Values: { Name: string, Value: number, Docs: string }[] | null
}

export interface Strings {
	Name: string
	Docs: string
	Values: { Name: string, Value: string, Docs: string }[] | null
}

export type NamedType = Struct | Strings | Ints
export type TypenameMap = { [k: string]: NamedType }
// verifyArg typechecks "v" against "typewords", returning a new (possibly modified) value for JSON-encoding.
// toJS indicates if the data is coming into JS. If so, timestamps are turned into JS Dates. Otherwise, JS Dates are turned into strings.
// allowUnknownKeys configures whether unknown keys in structs are allowed.
// types are the named types of the API.
export const verifyArg = (path: string, v: any, typewords: string[], toJS: boolean, allowUnknownKeys: boolean, types: TypenameMap, opts: ClientOptions): any => {
	return new verifier(types, toJS, allowUnknownKeys, opts).verify(path, v, typewords)
}

export const parse = (name: string, v: any): any => verifyArg(name, v, [name], true, false, types, defaultOptions)
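// Example (illustrative sketch): parse can validate JSON that arrives outside a
// regular API call, e.g. from an SSE stream or a cache, against a named type of
// this API such as Account; "rawJSON" is a placeholder:
//
//	const account = api.parse('Account', JSON.parse(rawJSON)) as api.Account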
class verifier {
	constructor(private types: TypenameMap, private toJS: boolean, private allowUnknownKeys: boolean, private opts: ClientOptions) {
	}

	verify(path: string, v: any, typewords: string[]): any {
		typewords = typewords.slice(0)
		const ww = typewords.shift()

		const error = (msg: string) => {
			if (path != '') {
				msg = path + ': ' + msg
			}
			throw new Error(msg)
		}

		if (typeof ww !== 'string') {
			error('bad typewords')
			return // should not be necessary, typescript doesn't see error always throws an exception?
		}
		const w: string = ww

		const ensure = (ok: boolean, expect: string): any => {
			if (!ok) {
				error('got ' + JSON.stringify(v) + ', expected ' + expect)
}
return v
}
		switch (w) {
			case 'nullable':
				if (v === null || v === undefined && this.opts.nullableOptional) {
					return v
				}
				return this.verify(path, v, typewords)
			case '[]':
				if (v === null && this.opts.slicesNullable || v === undefined && this.opts.slicesNullable && this.opts.nullableOptional) {
					return v
				}
				ensure(Array.isArray(v), "array")
				return v.map((e: any, i: number) => this.verify(path + '[' + i + ']', e, typewords))
			case '{}':
				if (v === null && this.opts.mapsNullable || v === undefined && this.opts.mapsNullable && this.opts.nullableOptional) {
					return v
				}
				ensure(v !== null || typeof v === 'object', "object")
				const r: any = {}
				for (const k in v) {
					r[k] = this.verify(path + '.' + k, v[k], typewords)
				}
				return r
		}

		ensure(typewords.length == 0, "empty typewords")
		const t = typeof v
		switch (w) {
			case 'any':
				return v
			case 'bool':
				ensure(t === 'boolean', 'bool')
				return v
			case 'int8':
			case 'uint8':
			case 'int16':
			case 'uint16':
			case 'int32':
			case 'uint32':
			case 'int64':
			case 'uint64':
				ensure(t === 'number' && Number.isInteger(v), 'integer')
				return v
			case 'float32':
			case 'float64':
				ensure(t === 'number', 'float')
				return v
			case 'int64s':
			case 'uint64s':
				ensure(t === 'number' && Number.isInteger(v) || t === 'string', 'integer fitting in float without precision loss, or string')
				return '' + v
			case 'string':
				ensure(t === 'string', 'string')
				return v
			case 'timestamp':
				if (this.toJS) {
					ensure(t === 'string', 'string, with timestamp')
					const d = new Date(v)
					if (d instanceof Date && !isNaN(d.getTime())) {
						return d
					}
					error('invalid date ' + v)
				} else {
					ensure(t === 'object' && v !== null, 'non-null object')
					ensure(v.__proto__ === Date.prototype, 'Date')
					return v.toISOString()
				}
		}
		// We're left with named types.
		const nt = this.types[w]
		if (!nt) {
			error('unknown type ' + w)
		}
		if (v === null) {
			error('bad value ' + v + ' for named type ' + w)
		}

		if (structTypes[nt.Name]) {
			const t = nt as Struct
			if (typeof v !== 'object') {
				error('bad value ' + v + ' for struct ' + w)
			}
			const r: any = {}
			for (const f of t.Fields) {
				r[f.Name] = this.verify(path + '.' + f.Name, v[f.Name], f.Typewords)
			}
			// If going to JSON also verify no unknown fields are present.
			if (!this.allowUnknownKeys) {
				const known: { [key: string]: boolean } = {}
				for (const f of t.Fields) {
					known[f.Name] = true
				}
				Object.keys(v).forEach((k) => {
					if (!known[k]) {
						error('unknown key ' + k + ' for struct ' + w)
					}
				})
			}
			return r
		} else if (stringsTypes[nt.Name]) {
			const t = nt as Strings
			if (typeof v !== 'string') {
				error('mistyped value ' + v + ' for named strings ' + t.Name)
			}
			if (!t.Values || t.Values.length === 0) {
				return v
			}
			for (const sv of t.Values) {
				if (sv.Value === v) {
					return v
				}
			}
			error('unknown value ' + v + ' for named strings ' + t.Name)
		} else if (intsTypes[nt.Name]) {
			const t = nt as Ints
			if (typeof v !== 'number' || !Number.isInteger(v)) {
				error('mistyped value ' + v + ' for named ints ' + t.Name)
			}
			if (!t.Values || t.Values.length === 0) {
				return v
			}
			for (const sv of t.Values) {
				if (sv.Value === v) {
					return v
				}
			}
			error('unknown value ' + v + ' for named ints ' + t.Name)
} else {
			throw new Error('unexpected named type ' + nt)
}
}
}
export interface ClientOptions {
	baseURL?: string
	aborter?: { abort?: () => void }
	timeoutMsec?: number
	skipParamCheck?: boolean
	skipReturnCheck?: boolean
	slicesNullable?: boolean
	mapsNullable?: boolean
	nullableOptional?: boolean
	csrfHeader?: string
	login?: (reason: string) => Promise<string>
}
// The web interfaces use session-cookie and CSRF-based authentication rather
// than HTTP basic auth: with basic auth there is no way to log out, the
// mechanism doesn't indicate which server paths the credentials are for, and
// the original password is sent with every request. Sessions are managed
// server-side and automatically extended on each use, which makes them easy to
// invalidate and keeps the frontend simpler than with long- vs short-term
// sessions and refreshing. The session cookie is set by a successful call to
// Login; it is httponly, samesite=strict, scoped to the path of the web
// interface, and marked "secure" when set over HTTPS. A call to Logout
// invalidates a session. Changing a password invalidates all sessions for a
// user, except the one with which the password was changed. The CSRF value is
// random and associated with the session cookie; it is returned by the Login
// API call, stored in localstorage, and must be sent as a header for API
// calls, or as a parameter for direct form POSTs (where a custom header cannot
// be set). REST-like calls made directly by the browser, e.g. for images, have
// no CSRF protection. API calls without credentials return code "user:noAuth",
// and with bad credentials "user:badAuth"; the API client recognizes these and
// triggers a login, after which failed calls are automatically retried. Only
// for "user:badAuth" is an error message displayed in the login form (e.g.
// session expired). Ideally browsers would handle session management
// themselves, with trusted UI and a mechanism safer than sending original
// passwords, such as SCRAM, along with a standard way to create sessions; for
// now, web developers have to implement it, from the login prompt to sending
// the right session/csrf cookies/headers with each request. WebAuthn may be
// implemented in the future, though hardware tokens aren't an attractive
// option for many users, and it may be overkill as long as SMTP & IMAP still
// accept passwords. For issue #58.
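// Example (illustrative sketch): client options wiring up the cookie/CSRF-based
// authentication described above. The header name and "showLoginForm" helper
// are placeholders, not prescribed by this file:
//
//	const options: api.ClientOptions = {
//		csrfHeader: 'x-mox-csrf',
//		login: async (reason: string) => {
//			// Show a login form (reason is non-empty when e.g. a session
//			// expired) and return the csrf token from a successful Login call.
//			return await showLoginForm(reason)
//		},
//	}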
export interface AuthState {
	token?: string // For csrf request header.
	loginPromise?: Promise<void> // To let multiple API calls wait for a single login attempt, not each opening a login popup.
}

const _sherpaCall = async (baseURL: string, authState: AuthState, options: ClientOptions, paramTypes: string[][], returnTypes: string[][], name: string, params: any[]): Promise<any> => {
	if (!options.skipParamCheck) {
		if (params.length !== paramTypes.length) {
			return Promise.reject({ message: 'wrong number of parameters in sherpa call, saw ' + params.length + ' != expected ' + paramTypes.length })
		}
		params = params.map((v: any, index: number) => verifyArg('params[' + index + ']', v, paramTypes[index], false, false, types, options))
	}

	const simulate = async (json: string) => {
		const config = JSON.parse(json || 'null') || {}
		const waitMinMsec = config.waitMinMsec || 0
		const waitMaxMsec = config.waitMaxMsec || 0
		const wait = Math.random() * (waitMaxMsec - waitMinMsec)
		const failRate = config.failRate || 0
		return new Promise<void>((resolve, reject) => {
			if (options.aborter) {
				options.aborter.abort = () => {
					reject({ message: 'call to ' + name + ' aborted by user', code: 'sherpa:aborted' })
					reject = resolve = () => { }
				}
			}
			setTimeout(() => {
				const r = Math.random()
				if (r < failRate) {
					reject({ message: 'injected failure on ' + name, code: 'server:injected' })
				} else {
					resolve()
				}
				reject = resolve = () => { }
			}, waitMinMsec + wait)
		})
	}
// Only simulate when there is a debug string. Otherwise it would always interfere
// with setting options.aborter.
	let json: string = ''
	try {
		json = window.localStorage.getItem('sherpats-debug') || ''
	} catch (err) { }
	if (json) {
		await simulate(json)
	}
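	// Example (illustrative): enable simulation from the browser console; the
	// keys match those read above, the values are arbitrary:
	//
	//	localStorage.setItem('sherpats-debug', JSON.stringify({ waitMinMsec: 100, waitMaxMsec: 1000, failRate: 0.1 }))
	//	localStorage.removeItem('sherpats-debug') // disable again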
	const fn = (resolve: (v: any) => void, reject: (v: any) => void) => {
		let resolve1 = (v: any) => {
			resolve(v)
			resolve1 = () => { }
			reject1 = () => { }
		}
		let reject1 = (v: { code: string, message: string }) => {
			if ((v.code === 'user:noAuth' || v.code === 'user:badAuth') && options.login) {
				const login = options.login
				if (!authState.loginPromise) {
					authState.loginPromise = new Promise((aresolve, areject) => {
						login(v.code === 'user:badAuth' ? (v.message || '') : '')
							.then((token) => {
								authState.token = token
								authState.loginPromise = undefined
								aresolve()
							}, (err: any) => {
								authState.loginPromise = undefined
								areject(err)
							})
					})
				}
				authState.loginPromise
					.then(() => {
						fn(resolve, reject)
					}, (err: any) => {
						reject(err)
					})
				return
			}
			reject(v)
			resolve1 = () => { }
			reject1 = () => { }
		}

		const url = baseURL + name
		const req = new window.XMLHttpRequest()
		if (options.aborter) {
			options.aborter.abort = () => {
				req.abort()
				reject1({ code: 'sherpa:aborted', message: 'request aborted' })
			}
		}
		req.open('POST', url, true)
if (options.csrfHeader && authState.token) {
	// Send the csrf token (from a successful Login) in the configured header.
	req.setRequestHeader(options.csrfHeader, authState.token)
}
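// Hypothetical sketch (not part of the generated client): per the commit
// message above, an application stores the csrf token returned by Login in
// localstorage and restores it on page load; the storage key and header name
// here are assumptions:
//   authState.token = window.localStorage.getItem('csrftoken') || ''
//   const options = { csrfHeader: 'x-mox-csrf' }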
2023-12-31 13:55:22 +03:00
if (options.timeoutMsec) {
	req.timeout = options.timeoutMsec
}
req.onload = () => {
	if (req.status !== 200) {
		if (req.status === 404) {
			reject1({ code: 'sherpa:badFunction', message: 'function does not exist' })
		} else {
			reject1({ code: 'sherpa:http', message: 'error calling function, HTTP status: ' + req.status })
		}
		return
	}
	let resp: any
	try {
		resp = JSON.parse(req.responseText)
	} catch (err) {
		reject1({ code: 'sherpa:badResponse', message: 'bad JSON from server' })
		return
	}
	if (resp && resp.error) {
		const err = resp.error
		reject1({ code: err.code, message: err.message })
		return
	} else if (!resp || !resp.hasOwnProperty('result')) {
		reject1({ code: 'sherpa:badResponse', message: "invalid sherpa response object, missing 'result'" })
		return
	}
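	// For reference (not part of the generated client), the sherpa wire
	// format handled above and sent below:
	//   request body:   {"params": [...]}
	//   success:        {"result": ...}
	//   error:          {"error": {"code": "user:badAuth", "message": "..."}}
	// where the code/message values are just examples.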
	if (options.skipReturnCheck) {
		resolve1(resp.result)
		return
	}
	let result = resp.result
	try {
		if (returnTypes.length === 0) {
			if (result) {
				throw new Error('function ' + name + ' returned a value while prototype says it returns "void"')
			}
		} else if (returnTypes.length === 1) {
			result = verifyArg('result', result, returnTypes[0], true, true, types, options)
		} else {
			if (result.length !== returnTypes.length) {
				throw new Error('wrong number of values returned by ' + name + ', saw ' + result.length + ' != expected ' + returnTypes.length)
			}
			result = result.map((v: any, index: number) => verifyArg('result[' + index + ']', v, returnTypes[index], true, true, types, options))
		}
	} catch (err) {
		let errmsg = 'bad types'
		if (err instanceof Error) {
			errmsg = err.message
		}
		reject1({ code: 'sherpa:badTypes', message: errmsg })
		// reject1 neutralized resolve1, so the resolve1 call below is a no-op.
	}
	resolve1(result)
}
req.onerror = () => {
	reject1({ code: 'sherpa:connection', message: 'connection failed' })
}
req.ontimeout = () => {
	reject1({ code: 'sherpa:timeout', message: 'request timeout' })
}
req.setRequestHeader('Content-Type', 'application/json')
try {
	req.send(JSON.stringify({ params: params }))
} catch (err) {
	reject1({ code: 'sherpa:badData', message: 'cannot marshal to JSON' })
}
2024-01-04 15:10:48 +03:00
}
return await new Promise(fn)
2023-12-31 13:55:22 +03:00
}
}
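to illustrate the pieces above, a hedged call-site sketch; the client variable
and Account method are assumptions, not necessarily the generated api:

// Hypothetical call site: sherpa and auth errors surface as {code, message}.
try {
	const account = await client.Account()
	console.log('logged-in account', account)
} catch (err: any) {
	// e.g. 'sherpa:timeout', 'sherpa:connection', or 'user:badAuth' when no
	// options.login callback is configured to retry automatically.
	console.error(err.code, err.message)
}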