mox/webaccount/api.json

{
"Name": "Account",
"Docs": "Account exports web API functions for the account web interface. All its\nmethods are exported under api/. Function calls require valid HTTP\nAuthentication credentials of a user.",
"Functions": [
{
"Name": "LoginPrep",
"Docs": "LoginPrep returns a login token, and also sets it as cookie. Both must be\npresent in the call to Login.",
"Params": [],
"Returns": [
{
"Name": "r0",
"Typewords": [
"string"
]
}
]
},
{
"Name": "Login",
"Docs": "Login returns a session token for the credentials, or fails with error code\n\"user:badLogin\". Call LoginPrep to get a loginToken.",
"Params": [
{
"Name": "loginToken",
"Typewords": [
"string"
]
},
{
"Name": "username",
"Typewords": [
"string"
]
},
{
"Name": "password",
"Typewords": [
"string"
]
}
],
"Returns": [
{
"Name": "r0",
"Typewords": [
"CSRFToken"
]
}
]
},
{
"Name": "Logout",
"Docs": "Logout invalidates the session token.",
"Params": [],
"Returns": []
},
{
"Name": "SetPassword",
"Docs": "SetPassword saves a new password for the account, invalidating the previous password.\nSessions are not interrupted, and will keep working. New login attempts must use the new password.\nPassword must be at least 8 characters.",
"Params": [
{
"Name": "password",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "Account",
"Docs": "Account returns information about the account.\nStorageUsed is the sum of the sizes of all messages, in bytes.\nStorageLimit is the maximum storage that can be used, or 0 if there is no limit.",
"Params": [],
"Returns": [
{
"Name": "account",
"Typewords": [
"Account"
]
},
{
"Name": "storageUsed",
"Typewords": [
"int64"
]
},
{
"Name": "storageLimit",
"Typewords": [
"int64"
]
},
{
"Name": "suppressions",
"Typewords": [
"[]",
"Suppression"
]
}
]
},
{
"Name": "AccountSaveFullName",
"Docs": "AccountSaveFullName saves the full name (used as display name in email messages)\nfor the account.",
"Params": [
{
"Name": "fullName",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "DestinationSave",
"Docs": "DestinationSave updates a destination.\nOldDest is compared against the current destination. If it does not match, an\nerror is returned. Otherwise newDest is saved and the configuration reloaded.",
"Params": [
{
"Name": "destName",
"Typewords": [
"string"
]
},
{
"Name": "oldDest",
"Typewords": [
"Destination"
]
},
{
"Name": "newDest",
"Typewords": [
"Destination"
]
}
],
"Returns": []
},
{
"Name": "ImportAbort",
"Docs": "ImportAbort aborts an import that is in progress. If the import exists and isn't\nfinished, no changes will have been made by the import.",
"Params": [
{
"Name": "importToken",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "Types",
"Docs": "Types exposes types not used in API method signatures, such as the import form upload.",
"Params": [],
"Returns": [
{
"Name": "importProgress",
"Typewords": [
"ImportProgress"
]
}
]
},
{
"Name": "SuppressionList",
"Docs": "SuppressionList lists the addresses on the suppression list of this account.",
"Params": [],
"Returns": [
{
"Name": "suppressions",
"Typewords": [
"[]",
"Suppression"
]
}
]
},
{
"Name": "SuppressionAdd",
"Docs": "SuppressionAdd adds an email address to the suppression list.",
"Params": [
{
"Name": "address",
"Typewords": [
"string"
]
},
{
"Name": "manual",
"Typewords": [
"bool"
]
},
{
"Name": "reason",
"Typewords": [
"string"
]
}
],
"Returns": [
{
"Name": "suppression",
"Typewords": [
"Suppression"
]
}
]
},
{
"Name": "SuppressionRemove",
"Docs": "SuppressionRemove removes the email address from the suppression list.",
"Params": [
{
"Name": "address",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "OutgoingWebhookSave",
"Docs": "OutgoingWebhookSave saves a new webhook url for outgoing deliveries. If url\nis empty, the webhook is disabled. If authorization is non-empty it is used for\nthe Authorization header in HTTP requests. Events specifies the outgoing events\nto be delivered, or all if empty/nil.",
"Params": [
{
"Name": "url",
"Typewords": [
"string"
]
},
{
"Name": "authorization",
"Typewords": [
"string"
]
},
{
"Name": "events",
"Typewords": [
"[]",
"string"
]
}
],
"Returns": []
},
{
"Name": "OutgoingWebhookTest",
"Docs": "OutgoingWebhookTest makes a test webhook call to urlStr, with optional\nauthorization. If the HTTP request is made this call will succeed also for\nnon-2xx HTTP status codes.",
"Params": [
{
"Name": "urlStr",
"Typewords": [
"string"
]
},
{
"Name": "authorization",
"Typewords": [
"string"
]
},
{
"Name": "data",
"Typewords": [
"Outgoing"
]
}
],
"Returns": [
{
"Name": "code",
"Typewords": [
"int32"
]
},
{
"Name": "response",
"Typewords": [
"string"
]
},
{
"Name": "errmsg",
"Typewords": [
"string"
]
}
]
},
{
"Name": "IncomingWebhookSave",
"Docs": "IncomingWebhookSave saves a new webhook url for incoming deliveries. If url is\nempty, the webhook is disabled. If authorization is not empty, it is used in\nthe Authorization header in requests.",
"Params": [
{
"Name": "url",
"Typewords": [
"string"
]
},
{
"Name": "authorization",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "IncomingWebhookTest",
"Docs": "IncomingWebhookTest makes a test webhook HTTP delivery request to urlStr,\nwith optional authorization header. If the HTTP call is made, this function\nreturns non-error regardless of HTTP status code.",
"Params": [
{
"Name": "urlStr",
"Typewords": [
"string"
]
},
{
"Name": "authorization",
"Typewords": [
"string"
]
},
{
"Name": "data",
"Typewords": [
"Incoming"
]
}
],
"Returns": [
{
"Name": "code",
"Typewords": [
"int32"
]
},
{
"Name": "response",
"Typewords": [
"string"
]
},
{
"Name": "errmsg",
"Typewords": [
"string"
]
}
]
},
{
"Name": "FromIDLoginAddressesSave",
"Docs": "FromIDLoginAddressesSave saves new login addresses to enable unique SMTP\nMAIL FROM addresses (\"fromid\") for deliveries from the queue.",
"Params": [
{
"Name": "loginAddresses",
"Typewords": [
"[]",
"string"
]
}
],
"Returns": []
},
{
"Name": "KeepRetiredPeriodsSave",
"Docs": "KeepRetiredPeriodsSave saves periods to save retired messages and webhooks.",
"Params": [
{
"Name": "keepRetiredMessagePeriod",
"Typewords": [
"int64"
]
},
{
"Name": "keepRetiredWebhookPeriod",
"Typewords": [
"int64"
]
}
],
"Returns": []
},
{
"Name": "AutomaticJunkFlagsSave",
"Docs": "AutomaticJunkFlagsSave saves settings for automatically marking messages as\njunk/nonjunk when moved to mailboxes matching certain regular expressions.",
"Params": [
{
"Name": "enabled",
"Typewords": [
"bool"
]
},
{
"Name": "junkRegexp",
"Typewords": [
"string"
]
},
{
"Name": "neutralRegexp",
"Typewords": [
"string"
]
},
{
"Name": "notJunkRegexp",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "JunkFilterSave",
"Docs": "JunkFilterSave saves junk filter settings. If junkFilter is nil, the junk filter\nis disabled. Otherwise all fields except Threegrams are stored.",
"Params": [
{
"Name": "junkFilter",
"Typewords": [
"nullable",
"JunkFilter"
]
}
],
"Returns": []
},
{
"Name": "RejectsSave",
"Docs": "RejectsSave saves the RejectsMailbox and KeepRejects settings.",
"Params": [
{
"Name": "mailbox",
"Typewords": [
"string"
]
},
{
"Name": "keep",
"Typewords": [
"bool"
]
}
],
"Returns": []
},
{
"Name": "TLSPublicKeys",
"Docs": "",
"Params": [],
"Returns": [
{
"Name": "r0",
"Typewords": [
"[]",
"TLSPublicKey"
]
}
]
},
{
"Name": "TLSPublicKeyAdd",
"Docs": "",
"Params": [
{
"Name": "loginAddress",
"Typewords": [
"string"
]
},
{
"Name": "name",
"Typewords": [
"string"
]
},
{
"Name": "noIMAPPreauth",
"Typewords": [
"bool"
]
},
{
"Name": "certPEM",
"Typewords": [
"string"
]
}
],
"Returns": [
{
"Name": "r0",
"Typewords": [
"TLSPublicKey"
]
}
]
},
{
"Name": "TLSPublicKeyRemove",
"Docs": "",
"Params": [
{
"Name": "fingerprint",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "TLSPublicKeyUpdate",
"Docs": "",
"Params": [
{
"Name": "pubKey",
"Typewords": [
"TLSPublicKey"
]
}
],
"Returns": []
}
],
"Sections": [],
"Structs": [
{
"Name": "Account",
"Docs": "",
"Fields": [
{
"Name": "OutgoingWebhook",
"Docs": "",
"Typewords": [
"nullable",
"OutgoingWebhook"
]
},
{
"Name": "IncomingWebhook",
"Docs": "",
"Typewords": [
"nullable",
"IncomingWebhook"
]
},
{
"Name": "FromIDLoginAddresses",
"Docs": "",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "KeepRetiredMessagePeriod",
"Docs": "",
"Typewords": [
"int64"
]
},
{
"Name": "KeepRetiredWebhookPeriod",
"Docs": "",
"Typewords": [
"int64"
]
},
{
"Name": "Domain",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Description",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "FullName",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Destinations",
"Docs": "",
"Typewords": [
"{}",
"Destination"
]
},
{
"Name": "SubjectPass",
"Docs": "",
"Typewords": [
"SubjectPass"
]
},
{
"Name": "QuotaMessageSize",
"Docs": "",
"Typewords": [
"int64"
]
},
{
"Name": "RejectsMailbox",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "KeepRejects",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "AutomaticJunkFlags",
"Docs": "",
"Typewords": [
"AutomaticJunkFlags"
]
},
{
"Name": "JunkFilter",
"Docs": "todo: sane defaults for junkfilter",
"Typewords": [
"nullable",
"JunkFilter"
]
},
{
"Name": "MaxOutgoingMessagesPerDay",
"Docs": "",
"Typewords": [
"int32"
]
},
{
"Name": "MaxFirstTimeRecipientsPerDay",
"Docs": "",
"Typewords": [
"int32"
]
},
{
"Name": "NoFirstTimeSenderDelay",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "Routes",
"Docs": "",
"Typewords": [
"[]",
"Route"
]
},
{
"Name": "DNSDomain",
"Docs": "Parsed form of Domain.",
"Typewords": [
"Domain"
]
},
{
"Name": "Aliases",
"Docs": "",
"Typewords": [
"[]",
"AddressAlias"
]
}
]
},
{
"Name": "OutgoingWebhook",
"Docs": "",
"Fields": [
{
"Name": "URL",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Authorization",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Events",
"Docs": "",
"Typewords": [
"[]",
"string"
]
}
]
},
{
"Name": "IncomingWebhook",
"Docs": "",
"Fields": [
{
"Name": "URL",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Authorization",
"Docs": "",
"Typewords": [
"string"
]
}
]
},
{
"Name": "Destination",
"Docs": "",
"Fields": [
{
"Name": "Mailbox",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Rulesets",
"Docs": "",
"Typewords": [
"[]",
"Ruleset"
]
},
{
"Name": "FullName",
"Docs": "",
"Typewords": [
"string"
]
}
]
},
{
"Name": "Ruleset",
"Docs": "",
"Fields": [
{
"Name": "SMTPMailFromRegexp",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "MsgFromRegexp",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "VerifiedDomain",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "HeadersRegexp",
"Docs": "",
"Typewords": [
"{}",
"string"
]
},
{
"Name": "IsForward",
"Docs": "todo: once we implement ARC, we can use dkim domains that we cannot verify but that the arc-verified forwarding mail server was able to verify.",
"Typewords": [
"bool"
]
},
{
"Name": "ListAllowDomain",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "AcceptRejectsToMailbox",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Mailbox",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Comment",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "VerifiedDNSDomain",
"Docs": "",
"Typewords": [
"Domain"
]
},
{
"Name": "ListAllowDNSDomain",
"Docs": "",
"Typewords": [
"Domain"
]
}
]
},
{
"Name": "Domain",
"Docs": "Domain is a domain name, with one or more labels, with at least an ASCII\nrepresentation, and for IDNA non-ASCII domains a unicode representation.\nThe ASCII string must be used for DNS lookups. The strings do not have a\ntrailing dot. When using with StrictResolver, add the trailing dot.",
"Fields": [
{
"Name": "ASCII",
"Docs": "A non-unicode domain, e.g. with A-labels (xn--...) or NR-LDH (non-reserved letters/digits/hyphens) labels. Always in lower case. No trailing dot.",
"Typewords": [
"string"
]
},
{
"Name": "Unicode",
"Docs": "Name as U-labels, in Unicode NFC. Empty if this is an ASCII-only domain. No trailing dot.",
"Typewords": [
"string"
]
}
]
},
{
"Name": "SubjectPass",
"Docs": "",
"Fields": [
{
"Name": "Period",
"Docs": "todo: have a reasonable default for this?",
"Typewords": [
"int64"
]
}
]
},
{
"Name": "AutomaticJunkFlags",
"Docs": "",
"Fields": [
{
"Name": "Enabled",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "JunkMailboxRegexp",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "NeutralMailboxRegexp",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "NotJunkMailboxRegexp",
"Docs": "",
"Typewords": [
"string"
]
}
]
},
{
"Name": "JunkFilter",
"Docs": "",
"Fields": [
{
"Name": "Threshold",
"Docs": "",
"Typewords": [
"float64"
]
},
{
"Name": "Onegrams",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "Twograms",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "Threegrams",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "MaxPower",
"Docs": "",
"Typewords": [
"float64"
]
},
{
"Name": "TopWords",
"Docs": "",
"Typewords": [
"int32"
]
},
{
"Name": "IgnoreWords",
"Docs": "",
"Typewords": [
"float64"
]
},
{
"Name": "RareWords",
"Docs": "",
"Typewords": [
"int32"
]
}
]
},
{
"Name": "Route",
"Docs": "",
"Fields": [
{
"Name": "FromDomain",
"Docs": "",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "ToDomain",
"Docs": "",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "MinimumAttempts",
"Docs": "",
"Typewords": [
"int32"
]
},
{
"Name": "Transport",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "FromDomainASCII",
"Docs": "",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "ToDomainASCII",
"Docs": "",
"Typewords": [
"[]",
"string"
]
}
]
},
{
"Name": "AddressAlias",
"Docs": "",
"Fields": [
{
"Name": "SubscriptionAddress",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Alias",
"Docs": "Without members.",
"Typewords": [
"Alias"
]
},
{
"Name": "MemberAddresses",
"Docs": "Only if allowed to see.",
"Typewords": [
"[]",
"string"
]
}
]
},
{
"Name": "Alias",
"Docs": "",
"Fields": [
{
"Name": "Addresses",
"Docs": "",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "PostPublic",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "ListMembers",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "AllowMsgFrom",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "LocalpartStr",
"Docs": "In encoded form.",
"Typewords": [
"string"
]
},
{
"Name": "Domain",
"Docs": "",
"Typewords": [
"Domain"
]
},
{
"Name": "ParsedAddresses",
"Docs": "Matches addresses.",
"Typewords": [
"[]",
"AliasAddress"
]
}
]
},
{
"Name": "AliasAddress",
"Docs": "",
"Fields": [
{
"Name": "Address",
"Docs": "Parsed address.",
"Typewords": [
"Address"
]
},
{
"Name": "AccountName",
"Docs": "Looked up.",
"Typewords": [
"string"
]
},
{
"Name": "Destination",
"Docs": "Belonging to address.",
"Typewords": [
"Destination"
]
}
]
},
{
"Name": "Address",
"Docs": "Address is a parsed email address.",
"Fields": [
{
"Name": "Localpart",
"Docs": "",
"Typewords": [
"Localpart"
]
},
{
"Name": "Domain",
"Docs": "todo: shouldn't we accept an ip address here too? and merge this type into smtp.Path.",
"Typewords": [
"Domain"
]
}
]
},
{
"Name": "Suppression",
"Docs": "Suppression is an address to which messages will not be delivered. Attempts to\ndeliver or queue will result in an immediate permanent failure to deliver.",
"Fields": [
{
"Name": "ID",
"Docs": "",
"Typewords": [
"int64"
]
},
{
"Name": "Created",
"Docs": "",
"Typewords": [
"timestamp"
]
},
{
"Name": "Account",
"Docs": "Suppression applies to this account only.",
"Typewords": [
"string"
]
},
{
"Name": "BaseAddress",
"Docs": "Unicode. Address with fictional simplified localpart: lowercase, dots removed (gmail), first token before any \"-\" or \"+\" (typical catchall separator).",
"Typewords": [
"string"
]
},
{
"Name": "OriginalAddress",
"Docs": "Unicode. Address that caused this suppression.",
"Typewords": [
"string"
]
},
{
"Name": "Manual",
"Docs": "",
"Typewords": [
"bool"
]
},
{
"Name": "Reason",
"Docs": "",
"Typewords": [
"string"
]
}
]
},
{
"Name": "ImportProgress",
"Docs": "ImportProgress is returned after uploading a file to import.",
"Fields": [
{
"Name": "Token",
"Docs": "For fetching progress, or cancelling an import.",
"Typewords": [
"string"
]
}
]
},
{
"Name": "Outgoing",
"Docs": "Outgoing is the payload sent to webhook URLs for events about outgoing deliveries.",
"Fields": [
{
"Name": "Version",
"Docs": "Format of hook, currently 0.",
"Typewords": [
"int32"
]
},
{
"Name": "Event",
"Docs": "Type of outgoing delivery event.",
"Typewords": [
"OutgoingEvent"
]
},
{
"Name": "DSN",
"Docs": "If this event was triggered by a delivery status notification message (DSN).",
"Typewords": [
"bool"
]
},
{
"Name": "Suppressing",
"Docs": "If true, this failure caused the address to be added to the suppression list.",
"Typewords": [
"bool"
]
},
{
"Name": "QueueMsgID",
"Docs": "ID of message in queue.",
"Typewords": [
"int64"
]
},
{
"Name": "FromID",
"Docs": "As used in MAIL FROM, can be empty, for incoming messages.",
"Typewords": [
"string"
]
},
{
"Name": "MessageID",
"Docs": "From Message-Id header, as set by submitter or us, with enclosing \u003c\u003e.",
"Typewords": [
"string"
]
},
{
"Name": "Subject",
"Docs": "Of original message.",
"Typewords": [
"string"
]
},
{
"Name": "WebhookQueued",
"Docs": "When webhook was first queued for delivery.",
"Typewords": [
"timestamp"
]
},
{
"Name": "SMTPCode",
"Docs": "Optional, for errors only, e.g. 451, 550. See package smtp for definitions.",
"Typewords": [
"int32"
]
},
{
"Name": "SMTPEnhancedCode",
"Docs": "Optional, for errors only, e.g. 5.1.1.",
"Typewords": [
"string"
]
},
{
"Name": "Error",
"Docs": "Error message while delivering, or from DSN from remote, if any.",
"Typewords": [
"string"
]
},
{
"Name": "Extra",
"Docs": "Extra fields set for message during submit, through webapi call or through X-Mox-Extra-* headers during SMTP submission.",
"Typewords": [
"{}",
"string"
]
}
]
},
{
"Name": "Incoming",
"Docs": "Incoming is the data sent to a webhook for incoming deliveries over SMTP.",
"Fields": [
{
"Name": "Version",
"Docs": "Format of hook, currently 0.",
"Typewords": [
"int32"
]
},
{
"Name": "From",
"Docs": "Message \"From\" header, typically has one address.",
"Typewords": [
"[]",
"NameAddress"
]
},
{
"Name": "To",
"Docs": "",
"Typewords": [
"[]",
"NameAddress"
]
},
{
"Name": "CC",
"Docs": "",
"Typewords": [
"[]",
"NameAddress"
]
},
{
"Name": "BCC",
"Docs": "Often empty, even if you were a BCC recipient.",
"Typewords": [
"[]",
"NameAddress"
]
},
{
"Name": "ReplyTo",
"Docs": "Optional Reply-To header, typically absent or with one address.",
"Typewords": [
"[]",
"NameAddress"
]
},
{
"Name": "Subject",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "MessageID",
"Docs": "Of Message-Id header, typically of the form \"\u003crandom@hostname\u003e\", includes \u003c\u003e.",
"Typewords": [
"string"
]
},
{
"Name": "InReplyTo",
"Docs": "Optional, the message-id this message is a reply to. Includes \u003c\u003e.",
"Typewords": [
"string"
]
},
{
"Name": "References",
"Docs": "Optional, zero or more message-ids this message is a reply/forward/related to. The last entry is the most recent/immediate message this is a reply to. Earlier entries are the parents in a thread. Values include \u003c\u003e.",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "Date",
"Docs": "Time in \"Date\" message header, can be different from time received.",
"Typewords": [
"nullable",
"timestamp"
]
},
{
"Name": "Text",
"Docs": "Contents of text/plain and/or text/html part (if any), with \"\\n\" line-endings, converted from \"\\r\\n\". Values are truncated to 1MB (1024*1024 bytes). Use webapi MessagePartGet to retrieve the full part data.",
"Typewords": [
"string"
]
},
{
"Name": "HTML",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Structure",
"Docs": "Parsed form of MIME message.",
"Typewords": [
"Structure"
]
},
{
"Name": "Meta",
"Docs": "Details about message in storage, and SMTP transaction details.",
"Typewords": [
"IncomingMeta"
]
}
]
},
{
"Name": "NameAddress",
"Docs": "",
"Fields": [
{
"Name": "Name",
"Docs": "Optional, human-readable \"display name\" of the addressee.",
"Typewords": [
"string"
]
},
{
"Name": "Address",
"Docs": "Required, email address.",
"Typewords": [
"string"
]
}
]
},
{
"Name": "Structure",
"Docs": "",
"Fields": [
{
"Name": "ContentType",
"Docs": "Lower case, e.g. text/plain.",
"Typewords": [
"string"
]
},
{
"Name": "ContentTypeParams",
"Docs": "Lower case keys, original case values, e.g. {\"charset\": \"UTF-8\"}.",
"Typewords": [
"{}",
"string"
]
},
{
"Name": "ContentID",
"Docs": "Can be empty. Otherwise, should be a value wrapped in \u003c\u003e's. For use in HTML, referenced as URI `cid:...`.",
"Typewords": [
"string"
]
},
{
"Name": "ContentDisposition",
"Docs": "Lower-case value, e.g. \"attachment\", \"inline\" or empty when absent. Without the key/value header parameters.",
"Typewords": [
"string"
]
},
{
"Name": "Filename",
"Docs": "Filename for this part, based on \"filename\" parameter from Content-Disposition, or \"name\" from Content-Type after decoding.",
"Typewords": [
"string"
]
},
{
"Name": "DecodedSize",
"Docs": "Size of content after decoding content-transfer-encoding. For text and HTML parts, this can be larger than the data returned since this size includes \\r\\n line endings.",
"Typewords": [
"int64"
]
},
{
"Name": "Parts",
"Docs": "Subparts of a multipart message, possibly recursive.",
"Typewords": [
"[]",
"Structure"
]
}
]
},
{
"Name": "IncomingMeta",
"Docs": "",
"Fields": [
{
"Name": "MsgID",
"Docs": "ID of message in storage, and to use in webapi calls like MessageGet.",
"Typewords": [
"int64"
]
},
{
"Name": "MailFrom",
"Docs": "Address used during SMTP \"MAIL FROM\" command.",
"Typewords": [
"string"
]
},
{
"Name": "MailFromValidated",
"Docs": "Whether SMTP MAIL FROM address was SPF-validated.",
"Typewords": [
"bool"
]
},
{
"Name": "MsgFromValidated",
"Docs": "Whether address in message \"From\"-header was DMARC(-like) validated.",
"Typewords": [
"bool"
]
},
{
"Name": "RcptTo",
"Docs": "SMTP RCPT TO address used in SMTP.",
"Typewords": [
"string"
]
},
{
"Name": "DKIMVerifiedDomains",
"Docs": "Verified domains from DKIM-signature in message. Can be different domain than used in addresses.",
"Typewords": [
"[]",
"string"
]
},
{
"Name": "RemoteIP",
"Docs": "Where the message was delivered from.",
"Typewords": [
"string"
]
},
{
"Name": "Received",
"Docs": "When message was received, may be different from the Date header.",
"Typewords": [
"timestamp"
]
},
{
"Name": "MailboxName",
"Docs": "Mailbox where message was delivered to, based on configured rules. Defaults to \"Inbox\".",
"Typewords": [
"string"
]
},
{
"Name": "Automated",
"Docs": "Whether this message was automated and should not receive automated replies. E.g. out of office or mailing list messages.",
"Typewords": [
"bool"
]
}
]
},
{
"Name": "TLSPublicKey",
"Docs": "TLSPublicKey is a public key for use with TLS client authentication based on the\npublic key of the certificate.",
"Fields": [
{
"Name": "Fingerprint",
"Docs": "Raw-url-base64-encoded Subject Public Key Info of certificate.",
"Typewords": [
"string"
]
},
{
"Name": "Created",
"Docs": "",
"Typewords": [
"timestamp"
]
},
{
"Name": "Type",
"Docs": "E.g. \"rsa-2048\", \"ecdsa-p256\", \"ed25519\"",
"Typewords": [
"string"
]
},
{
"Name": "Name",
"Docs": "Descriptive name to identify the key, e.g. the device where key is used.",
"Typewords": [
"string"
]
},
{
"Name": "NoIMAPPreauth",
"Docs": "If set, new immediate authenticated TLS connections are not moved to \"authenticated\" state. For clients that don't understand it, and will try an authenticate command anyway.",
"Typewords": [
"bool"
]
},
{
"Name": "CertDER",
"Docs": "",
"Typewords": [
"[]",
"uint8"
]
},
{
"Name": "Account",
"Docs": "Key authenticates this account.",
"Typewords": [
"string"
]
},
{
"Name": "LoginAddress",
"Docs": "Must belong to account.",
"Typewords": [
"string"
]
}
]
}
],
"Ints": [],
"Strings": [
{
"Name": "CSRFToken",
"Docs": "",
"Values": null
},
{
"Name": "Localpart",
"Docs": "Localpart is a decoded local part of an email address, before the \"@\".\nFor quoted strings, values do not hold the double quote or escaping backslashes.\nAn empty string can be a valid localpart.\nLocalparts are in Unicode NFC.",
"Values": null
},
{
"Name": "OutgoingEvent",
"Docs": "OutgoingEvent is an activity for an outgoing delivery. Either generated by the\nqueue, or through an incoming DSN (delivery status notification) message.",
"Values": [
{
"Name": "EventDelivered",
"Value": "delivered",
"Docs": "Message was accepted by a next-hop server. This does not necessarily mean the\nmessage has been delivered in the mailbox of the user."
},
{
"Name": "EventSuppressed",
"Value": "suppressed",
"Docs": "Outbound delivery was suppressed because the recipient address is on the\nsuppression list of the account, or a simplified/base variant of the address is."
},
{
"Name": "EventDelayed",
"Value": "delayed",
"Docs": "A delivery attempt failed but delivery will be retried again later."
},
{
"Name": "EventFailed",
"Value": "failed",
"Docs": "Delivery of the message failed and will not be tried again. Also see the\n\"Suppressing\" field of [Outgoing]."
},
{
"Name": "EventRelayed",
"Value": "relayed",
"Docs": "Message was relayed into a system that does not generate DSNs. Should only\nhappen when explicitly requested."
},
{
"Name": "EventExpanded",
"Value": "expanded",
"Docs": "Message was accepted and is being delivered to multiple recipients (e.g. the\naddress was an alias/list), which may generate more DSNs."
},
{
"Name": "EventCanceled",
"Value": "canceled",
"Docs": "Message was removed from the queue, e.g. canceled by admin/user."
},
{
"Name": "EventUnrecognized",
"Value": "unrecognized",
"Docs": "An incoming message was received that was either a DSN with an unknown event\ntype (\"action\"), or an incoming non-DSN-message was received for the unique\nper-outgoing-message address used for sending."
}
]
}
],
"SherpaVersion": 0,
"SherpadocVersion": 1
}