<!doctype html>
<html>
<head>
<title>Mox Admin</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
add webmail
it was far down on the roadmap, but implemented earlier, because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change. jmap has lots of other requirements, so it's already
a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and backend are written together, making their interface/api much
smaller and simpler than jmap.
one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes. keeping this
data consistent after any change to the stored messages (through the code base)
is tricky, so mox now has a consistency check that verifies the counts are
correct, which runs only during tests, each time an internal account reference
is closed. we have a few more internal "changes" that are propagated for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.
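as a rough typescript illustration of the bookkeeping and change propagation
described above (field and type names are hypothetical, not mox's actual
definitions):

	// Per-mailbox counts, kept consistent with every change to stored messages.
	interface MailboxCounts {
		Total: number    // all messages in the mailbox
		Unread: number   // messages without the \Seen flag
		Unseen: number   // without \Seen and not marked \Deleted
		Deleted: number  // messages marked \Deleted, not yet expunged
		Size: number     // sum of message sizes, in bytes
	}

	// A change propagated to the webmail frontend, e.g. over its SSE connection.
	// IMAP has no way to push several of these on an open connection.
	type Change =
		| {type: 'mailboxCounts', mailboxID: number, counts: MailboxCounts}
		| {type: 'mailboxSpecialUse', mailboxID: number, specialUse: string}
		| {type: 'mailboxKeywords', mailboxID: number, keywords: string[]}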
the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view, default is automatic based on
screen size. the panes can be resized by the user. buttons for actions are just
text, not icons. clicking a button briefly shows the shortcut for the action in
the bottom right, helping with learning to operate quickly. any text that is
underdotted has a title attribute that causes more information to be displayed,
e.g. what a button does or a field is about. to highlight potential phishing
attempts, any text (anywhere in the webclient) that switches unicode "blocks"
(a rough approximation to (language) scripts) within a word is underlined
orange. multiple messages can be selected with familiar ui interaction:
clicking while holding control and/or shift keys. keyboard navigation works
with arrows/page up/down and home/end keys, and also with a few basic vi-like
keys for list/message navigation. we prefer showing the text version of a
message instead of the html version (with inlined images only). html
messages are shown
in an iframe served from an endpoint with CSP headers to prevent dangerous
resources (scripts, external images) from being loaded. the html is also
sanitized, with javascript removed. a user can choose to load external
resources (e.g. images for tracking purposes).
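the unicode "block" check could look roughly like this (a sketch with a
hypothetical helper and a deliberately tiny block table, not the actual
webmail code):

	// True if letters within a single word come from different unicode blocks,
	// a rough approximation of mixed (language) scripts.
	const mixesScripts = (word: string): boolean => {
		const block = (ch: string): string => {
			const cp = ch.codePointAt(0) || 0
			if (cp <= 0x024f) return 'latin'
			if (cp >= 0x0370 && cp <= 0x03ff) return 'greek'
			if (cp >= 0x0400 && cp <= 0x04ff) return 'cyrillic'
			return 'other'
		}
		const letters = [...word].filter(c => /\p{L}/u.test(c))
		return new Set(letters.map(block)).size > 1
	}
	// e.g. mixesScripts('pаypal') is true when its second letter is the cyrillic 'а'.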
the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend. since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls. the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple, it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, no additional
runtime code needed or complicated build processes used. the webmail is
served from a compressed, cacheable html file that includes the style and
javascript, currently just over 225kb uncompressed, under 60kb compressed (not
minified, including comments). we include the generated js files in the
repository, to keep the Go binary easily buildable and self-contained.
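a minimal sketch of such a "view", reusing the MailboxCounts type from the
sketch above (names are made up for illustration, not the actual webmail
code):

	// A view owns a root element that is added to the visible document, plus
	// functions through which changes are propagated with plain function calls.
	interface MailboxlistView {
		root: HTMLElement
		updateCounts: (mailboxID: number, counts: MailboxCounts) => void
		addMailbox: (name: string) => void
	}

	const newMailboxlistView = (): MailboxlistView => {
		const root = document.createElement('div')
		return {
			root,
			updateCounts: (mailboxID, counts) => {
				// look up the row for this mailbox and rerender only its counts
			},
			addMailbox: (name) => {
				// insert a new row at its sorted position
			},
		}
	}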
authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.
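continuing the sketches above, the overall flow is roughly (endpoint and
event names are hypothetical):

	// One long-lived SSE connection; API calls tell the backend which
	// mailbox/search/messages to stream, changes come back as events and are
	// applied to the views with explicit function calls.
	const mailboxlistView = newMailboxlistView()
	const eventSource = new EventSource('events')
	eventSource.addEventListener('change', (e) => {
		const changes: Change[] = JSON.parse((e as MessageEvent).data)
		for (const c of changes) {
			if (c.type === 'mailboxCounts') {
				mailboxlistView.updateCounts(c.mailboxID, c.counts)
			}
			// ... other change types, each handled by the view that shows them
		}
	})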
the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):
WebmailHTTP:
	Enabled: true
WebmailHTTPS:
	Enabled: true
special thanks to liesbeth, gerben, andrii for early user feedback.
there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
<link rel="icon" href="noNeedlessFaviconRequestsPlease:" />
<style>
body, html { padding: 1em; font-size: 16px; }
* { font-size: inherit; font-family: ubuntu, lato, sans-serif; margin: 0; padding: 0; box-sizing: border-box; }
h1, h2, h3, h4 { margin-bottom: 1ex; }
h1 { font-size: 1.2rem; }
h2 { font-size: 1.1rem; }
h3, h4 { font-size: 1rem; }
ul { padding-left: 1rem; }
.literal { background-color: #eee; padding: .5em 1em; margin: 1ex 0; border: 1px solid #eee; border-radius: 4px; white-space: pre-wrap; font-family: monospace; font-size: 15px; tab-size: 4; }
table td, table th { padding: .2em .5em; }
improve webserver, add domain redirects (aliases), add tests and admin page ui to manage the config
- make builtin http handlers serve on specific domains, such as for mta-sts, so
e.g. /.well-known/mta-sts.txt isn't served on all domains.
- add logging of a few more fields in access logging.
- small tweaks/bug fixes in webserver request handling.
- add config option for redirecting entire domains to another (common enough).
- split httpserver metric into two: one for duration until writing header (i.e.
performance of server), another for duration until full response is sent to
client (i.e. performance as perceived by users).
- add admin ui, a new page for managing the configs. after making changes
and hitting "save", the changes take effect immediately. the page itself
doesn't look very well-designed (many input fields, makes it look messy). i
have an idea to improve it (explained in admin.html as todo) by making the
layout look just like the config file. not urgent though.
i've already changed my websites/webapps over.
the idea of adding a webserver is to take away a (the) reason for folks to want
to complicate their mox setup by running another webserver on the same machine.
i think the current webserver implementation can already serve most common use
cases. with a few more tweaks (feedback needed!) we should be able to get to 95%
of the use cases. the reverse proxy can take care of the remaining 5%.
nevertheless, a next step is still to change the quickstart to make it easier
for folks to run with an existing webserver, with existing tls certs/keys.
that's how this relates to issue #5.
table table td, table table th { padding: 0 0.1em; }
table.long >tbody >tr >td { padding: 1em .5em; }
table.long td { vertical-align: top; }
table > tbody > tr:nth-child(odd) { background-color: #f8f8f8; }
.text { max-width: 50em; }
p { margin-bottom: 1em; max-width: 50em; }
[title] { text-decoration: underline; text-decoration-style: dotted; }
fieldset { border: 0; }
#page { opacity: 1; animation: fadein 0.15s ease-in; }
#page.loading { opacity: 0.1; animation: fadeout 1s ease-out; }
@keyframes fadein { 0% { opacity: 0 } 100% { opacity: 1 } }
@keyframes fadeout { 0% { opacity: 1 } 100% { opacity: 0.1 } }
</style>
<script src="api/sherpa.js"></script>
<script>api._sherpa.baseurl = 'api/'</script>
</head>
<body>
<div id="page">Loading...</div>
<script>
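// Tiny DOM-building helpers, no framework: dom.<tag>(...) or dom('tag.class', ...)
// creates an element; children, text, attr({...}), prop({...}), style({...}) and
// named functions (added as event listeners) can all be passed as arguments.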
const [dom, style, attr, prop] = (function() {
function _domKids(e, ...kl) {
kl.forEach(k => {
if (typeof k === 'string' || k instanceof String) {
e.appendChild(document.createTextNode(k))
} else if (k instanceof Node) {
e.appendChild(k)
} else if (Array.isArray(k)) {
_domKids(e, ...k)
} else if (typeof k === 'function') {
if (!k.name) {
throw new Error('function without name', k)
}
e.addEventListener(k.name, k)
} else if (typeof k === 'object' && k !== null) {
if (k.root) {
e.appendChild(k.root)
return
}
for (const key in k) {
const value = k[key]
if (key === '_prop') {
for (const prop in value) {
e[prop] = value[prop]
}
} else if (key === '_attr') {
for (const prop in value) {
e.setAttribute(prop, value[prop])
}
} else if (key === '_listen') {
e.addEventListener(...value)
} else {
e.style[key] = value
}
}
} else {
console.log('bad kid', k)
throw new Error('bad kid')
}
})
}
const _dom = (kind, ...kl) => {
const t = kind.split('.')
const e = document.createElement(t[0])
for (let i = 1; i < t.length; i++) {
e.classList.add(t[i])
}
_domKids(e, kl)
return e
}
_dom._kids = function(e, ...kl) {
while(e.firstChild) {
e.removeChild(e.firstChild)
}
_domKids(e, kl)
}
const dom = new Proxy(_dom, {
get: function(dom, prop) {
if (prop in dom) {
return dom[prop]
}
const fn = (...kl) => _dom(prop, kl)
dom[prop] = fn
return fn
},
apply: function(target, that, args) {
if (args.length === 1 && typeof args[0] === 'object' && !Array.isArray(args[0])) {
return {_attr: args[0]}
}
return _dom(...args)
},
})
const style = x => x
const attr = x => { return {_attr: x} }
const prop = x => { return {_prop: x} }
return [dom, style, attr, prop]
})()
const green = '#1dea20'
const yellow = '#ffe400'
const red = '#ff7443'
const blue = '#8bc8ff'
const link = (href, anchorOpt) => dom.a(attr({href: href, rel: 'noopener noreferrer'}), anchorOpt || href)
const crumblink = (text, link) => dom.a(text, attr({href: link}))
const crumbs = (...l) => [dom.h1(l.map((e, index) => index === 0 ? e : [' / ', e])), dom.br()]
const footer = dom.div(
style({marginTop: '6ex', opacity: 0.75}),
link('https://github.com/mjl-/mox', 'mox'),
' ',
api._sherpa.version,
)
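// age returns a span with a coarse relative age like "3d 2h ago" (at most two
// units), with the full date in its title attribute; with future=true, a time
// in the future is rendered without the "ago" suffix.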
const age = (date, future, nowSecs) => {
if (!nowSecs) {
nowSecs = new Date().getTime()/1000
}
let t = nowSecs - date.getTime()/1000
let negative = false
if (t < 0) {
negative = true
t = -t
}
const minute = 60
const hour = 60*minute
const day = 24*hour
const month = 30*day
const year = 365*day
const periods = [year, month, day, hour, minute, 1]
const suffix = ['y', 'm', 'd', 'h', 'mins', 's']
let l = []
for (let i = 0; i < periods.length; i++) {
const p = periods[i]
if (t >= 2*p || i == periods.length-1) {
const n = Math.floor(t/p)
l.push('' + n + suffix[i])
t -= n*p
if (l.length >= 2) {
break
}
}
}
let s = l.join(' ')
if (!future || !negative) {
s += ' ago'
}
return dom.span(attr({title: date.toString()}), s)
}
const domainName = d => {
return d.Unicode || d.ASCII
}
const domainString = d => {
if (d.Unicode) {
return d.Unicode+" ("+d.ASCII+")"
}
return d.ASCII
}
const ipdomainString = ipd => {
if (ipd.IP.length > 0) {
// todo: properly format
return ipd.IP.join('.')
}
return domainString(ipd.Domain)
}
const formatSize = n => {
if (n > 10*1024*1024) {
return Math.round(n/(1024*1024)) + ' mb'
} else if (n > 500) {
return Math.round(n/1024) + ' kb'
}
return n + ' bytes'
}
const index = async () => {
const [domains, queueSize, checkUpdatesEnabled] = await Promise.all([
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.
other transports are:
- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
can be helpful if your ip is blocked, you need to get email out, and you have
another IP that isn't blocked.
keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.
which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.
routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.
we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.
for issue #36 by dmikushin. also based on earlier discussion on hackernews.
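roughly, the route selection works like this (a typescript sketch with
hypothetical field names, not mox's actual config structs):

	interface Route {
		FromDomain?: string[]     // match on sender domain, empty means any
		ToDomain?: string[]       // match on recipient domain, empty means any
		MinimumAttempts?: number  // only match from this delivery attempt onwards
		Transport: string         // empty string means regular direct delivery
	}

	// Routes are evaluated per delivery attempt: account first, then domain,
	// then global; the first matching route selects the transport.
	const selectTransport = (accountRoutes: Route[], domainRoutes: Route[], globalRoutes: Route[], senderDomain: string, recipientDomain: string, attempt: number): string => {
		for (const r of [...accountRoutes, ...domainRoutes, ...globalRoutes]) {
			if (r.FromDomain && r.FromDomain.length && !r.FromDomain.includes(senderDomain)) continue
			if (r.ToDomain && r.ToDomain.length && !r.ToDomain.includes(recipientDomain)) continue
			if (r.MinimumAttempts && attempt < r.MinimumAttempts) continue
			return r.Transport
		}
		return '' // no route matched: regular direct delivery to the destination MX
	}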
api.Domains(),
api.QueueSize(),
api.CheckUpdatesEnabled(),
])
let fieldset, domain, account, localpart
const page = document.getElementById('page')
dom._kids(page,
crumbs('Mox Admin'),
checkUpdatesEnabled ? [] : dom.p(box(yellow, 'Warning: Checking for updates has not been enabled in mox.conf (CheckUpdates: true).', dom.br(), 'Make sure you stay up to date through another mechanism!', dom.br(), 'You have a responsibility to keep the internet-connected software you run up to date and secure!', dom.br(), 'See ', link('https://updates.xmox.nl/changelog'))),
dom.p(
dom.a('Accounts', attr({href: '#accounts'})), dom.br(),
dom.a('Queue', attr({href: '#queue'})), ' ('+queueSize+')', dom.br(),
),
dom.h2('Domains'),
domains.length === 0 ? box(red, 'No domains') :
dom.ul(
domains.map(d => dom.li(dom.a(attr({href: '#domains/'+domainName(d)}), domainString(d)))),
),
dom.br(),
dom.h2('Add domain'),
dom.form(
async function submit(e) {
e.preventDefault()
e.stopPropagation()
fieldset.disabled = true
try {
await api.DomainAdd(domain.value, account.value, localpart.value)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldset.disabled = false
}
window.location.hash = '#domains/' + domain.value
},
fieldset=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
'Domain',
dom.br(),
domain=dom.input(attr({required: ''})),
),
' ',
dom.label(
style({display: 'inline-block'}),
'Postmaster/reporting account',
dom.br(),
account=dom.input(attr({required: ''})),
),
' ',
dom.label(
style({display: 'inline-block'}),
dom.span('Localpart (optional)', attr({title: 'Must be set if and only if account does not yet exist. The localpart for the user of this domain. E.g. postmaster.'})),
dom.br(),
localpart=dom.input(),
),
' ',
dom.button('Add domain', attr({title: 'Domain will be added and the config reloaded. You should add the required DNS records after adding the domain.'})),
),
),
dom.br(),
dom.h2('Reports'),
dom.div(dom.a('DMARC', attr({href: '#dmarc/reports'}))),
dom.div(dom.a('TLS', attr({href: '#tlsrpt'}))),
dom.br(),
dom.h2('Operations'),
dom.div(dom.a('MTA-STS policies', attr({href: '#mtasts'}))),
dom.div(dom.a('DMARC evaluations', attr({href: '#dmarc/evaluations'}))),
// todo: outgoing TLSRPT findings
// todo: routing, globally, per domain and per account
dom.br(),
dom.h2('DNS blocklist status'),
dom.div(dom.a('DNSBL status', attr({href: '#dnsbl'}))),
dom.br(),
dom.h2('Configuration'),
dom.div(dom.a('Webserver', attr({href: '#webserver'}))),
dom.div(dom.a('Files', attr({href: '#config'}))),
dom.div(dom.a('Log levels', attr({href: '#loglevels'}))),
footer,
)
}
const config = async () => {
const [staticPath, dynamicPath, staticText, dynamicText] = await api.ConfigFiles()
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'Config',
),
dom.h2(staticPath),
dom('pre.literal', staticText),
dom.h2(dynamicPath),
dom('pre.literal', dynamicText),
)
}
const loglevels = async () => {
const loglevels = await api.LogLevels()
const levels = ['error', 'info', 'debug', 'trace', 'traceauth', 'tracedata']
let form, fieldset, pkg, level
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'Log levels',
),
dom.p('Note: changing a log level here only changes it for the current process. When mox restarts, it sets the log levels from the configuration file. Change mox.conf to keep the changes.'),
dom.table(
dom.thead(
dom.tr(
dom.th('Package', attr({title: 'Log levels can be configured per package. E.g. smtpserver, imapserver, dkim, dmarc, tlsrpt, etc.'})),
dom.th('Level', attr({title: 'If you set the log level to "trace", imap and smtp protocol transcripts will be logged. Sensitive authentication is replaced with "***" unless the level is >= "traceauth". Data is masked with "..." unless the level is "tracedata".'})),
dom.th('Action'),
),
),
dom.tbody(
Object.entries(loglevels).map(t => {
let lvl
return dom.tr(
dom.td(t[0] || '(default)'),
dom.td(
lvl=dom.select(levels.map(l => dom.option(l, t[1] === l ? attr({selected: ''}) : []))),
),
dom.td(
dom.button('Save', attr({title: 'Set new log level for package.'}), async function click(e) {
e.preventDefault()
try {
e.target.disabled = true
await api.LogLevelSet(t[0], lvl.value)
} catch (err) {
console.log({err})
window.alert('Error: ' + err)
return
} finally {
e.target.disabled = false
}
window.location.reload() // todo: reload just the current loglevels
}),
' ',
dom.button('Remove', attr({title: 'Remove this log level, the default log level will apply.'}), t[0] === '' ? attr({disabled: ''}) : [], async function click(e) {
e.preventDefault()
try {
e.target.disabled = true
await api.LogLevelRemove(t[0])
} catch (err) {
console.log({err})
window.alert('Error: ' + err)
return
} finally {
e.target.disabled = false
}
window.location.reload() // todo: reload just the current loglevels
}),
),
)
}),
),
),
dom.br(),
dom.h2('Add log level setting'),
form=dom.form(
async function submit(e) {
e.preventDefault()
e.stopPropagation()
fieldset.disabled = true
try {
await api.LogLevelSet(pkg.value, level.value)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldset.disabled = false
}
form.reset()
window.location.reload() // todo: reload just the current loglevels
},
fieldset=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
'Package',
dom.br(),
pkg=dom.input(attr({required: ''})),
),
' ',
dom.label(
style({display: 'inline-block'}),
'Level',
dom.br(),
level=dom.select(
attr({required: ''}),
levels.map(l => dom.option(l, l === 'debug' ? attr({selected: ''}) : [])),
),
),
' ',
dom.button('Add'),
),
dom.br(),
dom.p('Suggestions for packages: autotls dkim dmarc dmarcdb dns dnsbl dsn http imapserver iprev junk message metrics mox moxio mtasts mtastsdb publicsuffix queue sendmail serve smtpserver spf store subjectpass tlsrpt tlsrptdb updates'),
),
)
}
const box = (color, ...l) => [
dom.div(
style({
display: 'inline-block',
padding: '.25em .5em',
backgroundColor: color,
borderRadius: '3px',
margin: '.5ex 0',
}),
l,
),
dom.br(),
]
const inlineBox = (color, ...l) =>
dom.span(
style({
display: 'inline-block',
padding: color ? '0.05em 0.2em' : '',
backgroundColor: color,
borderRadius: '3px',
}),
l,
)
const accounts = async () => {
const accounts = await api.Accounts()
let fieldset, account, email
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'Accounts',
),
dom.h2('Accounts'),
accounts.length === 0 ? dom.p('No accounts') :
dom.ul(
accounts.map(s => dom.li(dom.a(s, attr({href: '#accounts/'+s})))),
),
dom.br(),
dom.h2('Add account'),
dom.form(
async function submit(e) {
e.preventDefault()
e.stopPropagation()
fieldset.disabled = true
try {
await api.AccountAdd(account.value, email.value)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldset.disabled = false
}
window.location.hash = '#accounts/'+account.value
},
fieldset=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
'Account name',
dom.br(),
account=dom.input(attr({required: ''})),
),
' ',
dom.label(
style({display: 'inline-block'}),
'Email address',
dom.br(),
email=dom.input(attr({type: 'email', required: ''})),
),
' ',
dom.button('Add account', attr({title: 'The account will be added and the config reloaded.'})),
)
)
)
}
const account = async (name) => {
const config = await api.Account(name)
let form, fieldset, email
let formSendlimits, fieldsetSendlimits, maxOutgoingMessagesPerDay, maxFirstTimeRecipientsPerDay
let formPassword, fieldsetPassword, password, passwordHint
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Accounts', '#accounts'),
name,
),
dom.div(
'Default domain: ',
config.Domain ? dom.a(config.Domain, attr({href: '#domains/'+config.Domain})) : '(none)',
),
dom.br(),
dom.h2('Addresses'),
dom.table(
dom.thead(
dom.tr(
dom.th('Address'), dom.th('Action'),
),
),
dom.tbody(
Object.keys(config.Destinations).map(k => {
let v = k
const t = k.split('@')
if (t.length > 1) {
const d = t[t.length-1]
const lp = t.slice(0, t.length-1).join('@')
v = [
lp, '@',
dom.a(d, attr({href: '#domains/'+d})),
]
if (lp === '') {
v.unshift('(catchall) ')
}
}
return dom.tr(
dom.td(v),
dom.td(
dom.button('Remove', async function click(e) {
e.preventDefault()
if (!window.confirm('Are you sure you want to remove this address?')) {
return
}
e.target.disabled = true
try {
let addr = k
if (!addr.includes('@')) {
addr += '@' + config.Domain
}
await api.AddressRemove(addr)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
e.target.disabled = false
}
window.location.reload() // todo: reload just the list
}),
),
)
})
),
),
dom.br(),
dom.h2('Add address'),
form=dom.form(
async function submit(e) {
e.preventDefault()
e.stopPropagation()
fieldset.disabled = true
try {
let addr = email.value
if (!addr.includes('@')) {
if (!config.Domain) {
throw new Error('no default domain configured for account')
}
addr += '@' + config.Domain
}
await api.AddressAdd(addr, name)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldset.disabled = false
}
form.reset()
window.location.reload() // todo: only reload the destinations
},
fieldset=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
dom.span('Email address or localpart', attr({title: 'If empty, or localpart is empty, a catchall address is configured for the domain.'})),
dom.br(),
email=dom.input(),
),
' ',
dom.button('Add address'),
),
),
dom.br(),
dom.h2('Send limits'),
formSendlimits=dom.form(
fieldsetSendlimits=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
dom.span('Maximum outgoing messages per day', attr({title: 'Maximum number of outgoing messages for this account in a 24 hour window. This limits the damage to recipients and the reputation of this mail server in case of account compromise. Default 1000. MaxOutgoingMessagesPerDay in configuration file.'})),
dom.br(),
maxOutgoingMessagesPerDay=dom.input(attr({type: 'number', required: '', value: config.MaxOutgoingMessagesPerDay || 1000})),
),
' ',
dom.label(
style({display: 'inline-block'}),
dom.span('Maximum first-time recipients per day', attr({title: 'Maximum number of first-time recipients in outgoing messages for this account in a 24 hour window. This limits the damage to recipients and the reputation of this mail server in case of account compromise. Default 200. MaxFirstTimeRecipientsPerDay in configuration file.'})),
dom.br(),
maxFirstTimeRecipientsPerDay=dom.input(attr({type: 'number', required: '', value: config.MaxFirstTimeRecipientsPerDay || 200})),
),
' ',
dom.button('Save'),
),
async function submit(e) {
e.stopPropagation()
e.preventDefault()
fieldsetSendlimits.disabled = true
try {
await api.SetAccountLimits(name, parseInt(maxOutgoingMessagesPerDay.value) || 0, parseInt(maxFirstTimeRecipientsPerDay.value) || 0)
window.alert('Send limits saved.')
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldsetSendlimits.disabled = false
}
},
),
dom.br(),
dom.h2('Set new password'),
formPassword=dom.form(
fieldsetPassword=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
'New password',
dom.br(),
password=dom.input(attr({type: 'password', required: ''}), function focus() {
passwordHint.style.display = ''
}),
),
' ',
dom.button('Change password'),
),
passwordHint=dom.div(
style({display: 'none', marginTop: '.5ex'}),
dom.button('Generate random password', attr({type: 'button'}), function click(e) {
e.preventDefault()
let b = new Uint8Array(1)
let s = ''
const chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*-_;:,<.>/'
while (s.length < 12) {
self.crypto.getRandomValues(b)
if (Math.ceil(b[0]/chars.length)*chars.length > 255) {
continue // Prevent bias.
}
s += chars[b[0]%chars.length]
}
password.type = 'text'
password.value = s
}),
dom('div.text',
box(yellow, 'Important: Bots will try to bruteforce your password. Connections with failed authentication attempts will be rate limited but attackers WILL find weak passwords. If your account is compromised, spammers are likely to abuse your system, spamming your address and the wider internet in your name. So please pick a random, unguessable password, preferably at least 12 characters.'),
),
),
async function submit(e) {
e.stopPropagation()
e.preventDefault()
fieldsetPassword.disabled = true
try {
await api.SetPassword(name, password.value)
window.alert('Password has been changed.')
formPassword.reset()
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldsetPassword.disabled = false
}
},
),
dom.br(),
dom.h2('Danger'),
dom.button('Remove account', async function click(e) {
e.preventDefault()
if (!window.confirm('Are you sure you want to remove this account?')) {
return
}
e.target.disabled = true
try {
await api.AccountRemove(name)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
e.target.disabled = false
}
window.location.hash = '#accounts'
}),
)
}
const domain = async (d) => {
const end = new Date().toISOString()
const start = new Date(new Date().getTime() - 30*24*3600*1000).toISOString()
const [dmarcSummaries, tlsrptSummaries, localpartAccounts, dnsdomain, clientConfigs] = await Promise.all([
api.DMARCSummaries(start, end, d),
api.TLSRPTSummaries(start, end, d),
api.DomainLocalparts(d),
api.Domain(d),
api.ClientConfigsDomain(d),
])
let form, fieldset, localpart, account
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'Domain ' + domainString(dnsdomain),
),
dom.ul(
dom.li(dom.a('Required DNS records', attr({href: '#domains/' + d + '/dnsrecords'}))),
dom.li(dom.a('Check current actual DNS records and domain configuration', attr({href: '#domains/' + d + '/dnscheck'}))),
),
dom.br(),
dom.h2('Client configuration'),
dom.div('If autoconfig/autodiscover does not work with an email client, use the settings below for this domain. Authenticate with email address and password.'),
dom.table(
dom.thead(
dom.tr(
dom.th('Protocol'), dom.th('Host'), dom.th('Port'), dom.th('Listener'), dom.th('Note'),
),
),
dom.tbody(
clientConfigs.Entries.map(e =>
dom.tr(
dom.td(e.Protocol),
dom.td(domainString(e.Host)),
dom.td(''+e.Port),
dom.td(''+e.Listener),
dom.td(''+e.Note),
)
),
),
),
dom.br(),
dom.h2('DMARC aggregate reports summary'),
renderDMARCSummaries(dmarcSummaries),
dom.br(),
dom.h2('TLS reports summary'),
renderTLSRPTSummaries(tlsrptSummaries),
dom.br(),
dom.h2('Addresses'),
dom.table(
dom.thead(
dom.tr(
dom.th('Address'), dom.th('Account'), dom.th('Action'),
),
),
dom.tbody(
Object.entries(localpartAccounts).map(t =>
dom.tr(
dom.td(t[0] || '(catchall)'),
dom.td(dom.a(t[1], attr({href: '#accounts/'+t[1]}))),
dom.td(
dom.button('Remove', async function click(e) {
e.preventDefault()
if (!window.confirm('Are you sure you want to remove this address?')) {
return
}
e.target.disabled = true
try {
await api.AddressRemove(t[0] + '@' + d)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
e.target.disabled = false
}
window.location.reload() // todo: only reload the localparts
}),
),
),
),
),
),
dom.br(),
dom.h2('Add address'),
form=dom.form(
async function submit(e) {
e.preventDefault()
e.stopPropagation()
fieldset.disabled = true
try {
await api.AddressAdd(localpart.value+'@'+d, account.value)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
fieldset.disabled = false
}
form.reset()
window.location.reload() // todo: only reload the addresses
},
fieldset=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
dom.span('Localpart', attr({title: 'An empty localpart is the catchall destination/address for the domain.'})),
dom.br(),
localpart=dom.input(),
),
' ',
dom.label(
style({display: 'inline-block'}),
'Account',
dom.br(),
account=dom.input(attr({required: ''})),
),
' ',
dom.button('Add address', attr({title: 'Address will be added and the config reloaded.'})),
),
),
dom.br(),
dom.h2('External checks'),
dom.ul(
dom.li(link('https://internet.nl/mail/'+dnsdomain.ASCII+'/', 'Check configuration at internet.nl')),
),
dom.br(),
dom.h2('Danger'),
dom.button('Remove domain', async function click(e) {
e.preventDefault()
if (!window.confirm('Are you sure you want to remove this domain?')) {
return
}
e.target.disabled = true
try {
await api.DomainRemove(d)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
e.target.disabled = false
}
window.location.hash = '#'
}),
)
}
const domainDNSRecords = async (d) => {
const [records, dnsdomain] = await Promise.all([
api.DomainRecords(d),
api.Domain(d),
])
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Domain ' + domainString(dnsdomain), '#domains/'+d),
'DNS Records',
),
dom.h1('Required DNS records'),
dom('pre.literal', style({maxWidth: '70em'}), records.join('\n')),
dom.br(),
)
}
const domainDNSCheck = async (d) => {
const [checks, dnsdomain] = await Promise.all([
api.CheckDomain(d),
api.Domain(d),
])
const empty = l => !l || !l.length
const resultSection = (title, r, details) => {
let success = []
if (empty(r.Errors) && empty(r.Warnings)) {
success = box(green, 'OK')
}
const errors = empty(r.Errors) ? [] : box(red, dom.ul(style({marginLeft: '1em'}), r.Errors.map(s => dom.li(s))))
const warnings = empty(r.Warnings) ? [] : box(yellow, dom.ul(style({marginLeft: '1em'}), r.Warnings.map(s => dom.li(s))))
let instructions = []
if (!empty(r.Instructions)) {
instructions = dom.div(style({margin: '.5ex 0'}))
const instrs = [
r.Instructions.map(s => [
dom('pre.literal', style({display: 'inline-block', maxWidth: '60em'}), s),
dom.br(),
]),
]
if (empty(r.Errors)) {
dom._kids(instructions,
dom.div(
dom.a('Show instructions', attr({href: '#'}), function click(e) {
e.preventDefault()
dom._kids(instructions, instrs)
}),
dom.br(),
)
)
} else {
dom._kids(instructions, instrs)
}
}
return [
dom.h2(title),
success,
errors,
warnings,
details,
dom.br(),
instructions,
dom.br(),
]
}
implement dnssec-awareness throughout code, and dane for incoming/outgoing mail delivery
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec secure) added, as well as support
for enhanced dns errors, and looking up tlsa records (for dane). ideally it
would be upstreamed, but the chances seem slim.
dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.
but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in the tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.
mox now applies dane to delivering messages over smtp. mox already implemented
mta-sts for webpki/pkix-verification of certificates against the (large) pool
of CA's, and still enforces those policies when present. but it now also checks
for dane records, and will verify those if present. if dane and mta-sts are
both absent, the regular opportunistic tls with starttls is still done. and the
fallback to plaintext is also still done.
mox also makes it easy to setup dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup, because their public keys must be published in tlsa records in dns.
autocert would generate private keys on its own, so had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change and we can drop the fork.
with this change, using the quickstart to setup a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
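for reference, a tlsa record for dane-ee publishes a hash of the (static)
public key: usage 3 (dane-ee), selector 1 (spki), matching type 1 (sha2-256).
a typescript sketch of how such a record value could be derived (not mox
code; the record name shown is just an example):

	import { createHash } from 'node:crypto'

	// spkiDER is the DER-encoded SubjectPublicKeyInfo of the host's public key.
	// Because the private key is static, this value stays the same across
	// certificate renewals, so the DNS record does not need to change.
	const tlsaRecord = (spkiDER: Buffer): string => {
		const digest = createHash('sha256').update(spkiDER).digest('hex')
		return '3 1 1 ' + digest // e.g. published at _25._tcp.mail.example.com.
	}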
const detailsDNSSEC = ''
const detailsIPRev = !checks.IPRev.IPNames || !Object.entries(checks.IPRev.IPNames).length ? [] : [
dom.div('Hostname: ' + domainString(checks.IPRev.Hostname)),
dom.table(
dom.tr(dom.th('IP'), dom.th('Addresses')),
Object.entries(checks.IPRev.IPNames).sort().map(t =>
dom.tr(dom.td(t[0]), dom.td((t[1] || []).join(', '))),
)
),
]
const detailsMX = empty(checks.MX.Records) ? [] : [
dom.table(
dom.tr(dom.th('Preference'), dom.th('Host'), dom.th('IPs')),
checks.MX.Records.map(mx =>
dom.tr(dom.td(''+mx.Pref), dom.td(mx.Host), dom.td((mx.IPs || []).join(', '))),
)
),
]
const detailsTLS = ''
const detailsDANE = ''
const detailsSPF = [
checks.SPF.DomainTXT ? [dom.div('Domain TXT record: ' + checks.SPF.DomainTXT)] : [],
checks.SPF.HostTXT ? [dom.div('Host TXT record: ' + checks.SPF.HostTXT)] : [],
]
const detailsDKIM = empty(checks.DKIM.Records) ? [] : [
dom.table(
dom.tr(dom.th('Selector'), dom.th('TXT record')),
checks.DKIM.Records.map(rec =>
dom.tr(dom.td(rec.Selector), dom.td(rec.TXT)),
),
)
]
const detailsDMARC = !checks.DMARC.Domain ? [] : [
dom.div('Domain: ' + checks.DMARC.Domain),
!checks.DMARC.TXT ? [] : dom.div('TXT record: ' + checks.DMARC.TXT),
]
const detailsTLSRPT = !checks.TLSRPT.TXT ? [] : [
dom.div('TXT record: ' + checks.TLSRPT.TXT),
]
const detailsMTASTS = !checks.MTASTS.TXT && !checks.MTASTS.PolicyText ? [] : [
!checks.MTASTS.TXT ? [] : dom.div('MTA-STS record: ' + checks.MTASTS.TXT),
!checks.MTASTS.PolicyText ? [] : dom.div('MTA-STS policy: ', dom('pre.literal', style({maxWidth: '60em'}), checks.MTASTS.PolicyText)),
]
const detailsSRVConf = !Object.entries(checks.SRVConf.SRVs).length ? [] : [
dom.table(
dom.tr(dom.th('Service'), dom.th('Priority'), dom.th('Weight'), dom.th('Port'), dom.th('Host')),
Object.entries(checks.SRVConf.SRVs).map(t => {
const l = t[1]
if (!l || !l.length) {
return dom.tr(dom.td(t[0]), dom.td(attr({colspan: '4'}), '(none)'))
}
return t[1].map(r => dom.tr([t[0], r.Priority, r.Weight, r.Port, r.Target].map(s => dom.td(''+s))))
}),
),
]
const detailsAutoconf = !checks.Autoconf.IPs ? [] : [
dom.div('IPs: ' + checks.Autoconf.IPs.join(', ')),
]
const detailsAutodiscover = !checks.Autodiscover.Records ? [] : [
dom.table(
dom.tr(dom.th('Host'), dom.th('Port'), dom.th('Priority'), dom.th('Weight'), dom.th('IPs')),
checks.Autodiscover.Records.map(r =>
dom.tr([r.Target, r.Port, r.Priority, r.Weight, (r.IPs || []).join(', ')].map(s => dom.td(''+s)))
),
),
]
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Domain ' + domainString(dnsdomain), '#domains/'+d),
'Check DNS',
),
dom.h1('DNS records and domain configuration check'),
resultSection('DNSSEC', checks.DNSSEC, detailsDNSSEC),
resultSection('IPRev', checks.IPRev, detailsIPRev),
resultSection('MX', checks.MX, detailsMX),
resultSection('TLS', checks.TLS, detailsTLS),
resultSection('DANE', checks.DANE, detailsDANE),
resultSection('SPF', checks.SPF, detailsSPF),
resultSection('DKIM', checks.DKIM, detailsDKIM),
resultSection('DMARC', checks.DMARC, detailsDMARC),
resultSection('TLSRPT', checks.TLSRPT, detailsTLSRPT),
resultSection('MTA-STS', checks.MTASTS, detailsMTASTS),
resultSection('SRV conf', checks.SRVConf, detailsSRVConf),
resultSection('Autoconf', checks.Autoconf, detailsAutoconf),
resultSection('Autodiscover', checks.Autodiscover, detailsAutodiscover),
dom.br(),
)
}
const dmarcIndex = async () => {
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'DMARC reports and evaluations',
),
dom.ul(
dom.li(
dom.a(attr({href: '#dmarc/reports'}), 'Reports'), ', incoming DMARC aggregate reports.',
),
dom.li(
dom.a(attr({href: '#dmarc/evaluations'}), 'Evaluations'), ', for outgoing DMARC aggregate reports.',
),
),
)
}
const dmarcReports = async () => {
const end = new Date().toISOString()
const start = new Date(new Date().getTime() - 30*24*3600*1000).toISOString()
const summaries = await api.DMARCSummaries(start, end, "")
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('DMARC', '#dmarc'),
'Aggregate reporting summary',
2023-01-30 16:27:06 +03:00
),
dom.p('DMARC reports are periodically sent by other mail servers that received an email message with a "From" header with our domain. Domains can have a DMARC DNS record that asks other mail servers to send these aggregate reports for analysis.'),
renderDMARCSummaries(summaries),
)
}
const renderDMARCSummaries = (summaries) => {
return [
dom.p('Below a summary of DMARC aggregate reporting results for the past 30 days.'),
summaries.length === 0 ? dom.div(box(yellow, 'No domains with reports.')) :
dom('table',
dom.thead(
dom.tr(
dom.th('Domain', attr({title: 'Domain to which the DMARC policy applied. If example.com has a DMARC policy, and email is sent with a From-header with subdomain.example.com, and there is no DMARC record for that subdomain, but there is one for example.com, then the DMARC policy of example.com applies and reports are sent for that domain.'})),
dom.th('Messages', attr({title: 'Total number of messages that had the DMARC policy applied and reported. Actual messages sent is likely higher because not all email servers send DMARC aggregate reports, or perform DMARC checks at all.'})),
dom.th('DMARC "quarantine"/"reject"', attr({title: 'Messages for which policy was to mark them as spam (quarantine) or reject them during SMTP delivery.'})),
dom.th('DKIM "fail"', attr({title: 'Messages with a failing DKIM check. This can happen when sending through a mailing list where that list keeps your address in the message From-header but also strips DKIM-Signature headers in the message. DMARC evaluation passes if either DKIM passes or SPF passes.'})),
dom.th('SPF "fail"', attr({title: 'Message with a failing SPF check. This can happen with email forwarding and with mailing list. Other mail servers have sent email with this domain in the message From-header. DMARC evaluation passes if at least SPF or DKIM passes.'})),
dom.th('Policy overrides', attr({title: 'Mail servers can override the DMARC policy. E.g. a mail server may be able to detect emails coming from mailing lists that do not pass DMARC and would have to be rejected, but for which an override has been configured.'})),
)
),
dom.tbody(
summaries.map(r =>
dom.tr(
dom.td(dom.a(attr({href: '#domains/' + r.Domain + '/dmarc', title: 'See report details.'}), r.Domain)),
dom.td(style({textAlign: 'right'}), '' + r.Total),
dom.td(style({textAlign: 'right'}), r.DispositionQuarantine === 0 && r.DispositionReject === 0 ? '0/0' : box(red, '' + r.DispositionQuarantine + '/' + r.DispositionReject)),
dom.td(style({textAlign: 'right'}), box(r.DKIMFail === 0 ? green : red, '' + r.DKIMFail)),
dom.td(style({textAlign: 'right'}), box(r.SPFFail === 0 ? green : red, '' + r.SPFFail)),
dom.td(!r.PolicyOverrides ? [] : Object.entries(r.PolicyOverrides).map(kv => kv[0] + ': ' + kv[1]).join('; ')),
)
),
),
)
]
}
const dmarcEvaluations = async () => {
const evalStats = await api.DMARCEvaluationStats()
const isEmpty = (o) => {
for (const e in o) {
return false
}
return true
}
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('DMARC', '#dmarc'),
'Evaluations',
),
dom.p('Incoming messages are checked against the DMARC policy of the domain in the message From header. If the policy requests reporting on the resulting evaluations, they are stored in the database. Each interval of 1 to 24 hours, the evaluations may be sent to a reporting address specified in the domain\'s DMARC policy. Not all evaluations are a reason to send a report, but if a report is sent all evaluations are included.'),
dom.table(
dom.thead(
dom.tr(
dom.th('Domain', attr({title: 'Domain in the message From header. Keep in mind these can be forged, so this does not necessarily mean someone from this domain authentically tried delivering email.'})),
dom.th('Evaluations', attr({title: 'Total number of message delivery attempts, including retries.'})),
dom.th('Send report', attr({title: 'Whether the current evaluations will cause a report to be sent.'})),
),
),
dom.tbody(
Object.entries(evalStats).sort((a, b) => a[0] < b[0] ? -1 : 1).map(t =>
dom.tr(
dom.td(dom.a(attr({href: '#dmarc/evaluations/'+domainName(t[1].Domain)}), domainString(t[1].Domain))),
dom.td(style({textAlign: 'right'}), ''+t[1].Count),
dom.td(style({textAlign: 'right'}), t[1].SendReport ? '✓' : ''),
),
),
isEmpty(evalStats) ? dom.tr(dom.td(attr({colspan: '3'}), 'No evaluations.')) : [],
),
),
)
}
const dmarcEvaluationsDomain = async (domain) => {
const [d, evaluations] = await api.DMARCEvaluationsDomain(domain)
let lastInterval = ''
let lastAddresses = ''
const formatPolicy = (e) => {
const p = e.PolicyPublished
let s = ''
const add = (k, v) => {
if (v) {
s += k+'='+v+'; '
}
}
add('p', p.Policy)
add('sp', p.SubdomainPolicy)
add('adkim', p.ADKIM)
add('aspf', p.ASPF)
add('pct', ''+p.Percentage)
add('fo', ''+p.ReportingOptions)
return s
}
let lastPolicy = ''
const authStatus = (v) => inlineBox(v ? '' : yellow, v ? 'pass' : 'fail')
const formatDKIMResults = (results) => results.map(r => dom.div('selector '+r.Selector+(r.Domain !== domain ? ', domain '+r.Domain : '') + ': ', inlineBox(r.Result === "pass" ? '' : yellow, r.Result)))
const formatSPFResults = (alignedpass, results) => results.map(r => dom.div(''+r.Scope+(r.Domain !== domain ? ', domain '+r.Domain : '') + ': ', inlineBox(r.Result === "pass" && alignedpass ? '' : yellow, r.Result)))
const sourceIP = (ip) => {
const r = dom.span(ip, attr({title: 'Click to do a reverse lookup of the IP.'}), style({cursor: 'pointer'}), async function click(e) {
e.preventDefault()
try {
const rev = await api.LookupIP(ip)
r.innerText = ip + '\n' + rev.Hostnames.join('\n')
} catch (err) {
r.innerText = ip + '\nerror: ' +err.message
}
})
return r
}
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('DMARC', '#dmarc'),
crumblink('Evaluations', '#dmarc/evaluations'),
'Domain '+domainString(d),
),
dom.div(
dom.button('Remove evaluations', async function click(e) {
e.target.disabled = true
try {
await api.DMARCRemoveEvaluations(domain)
window.location.reload() // todo: only clear the table?
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
} finally {
e.target.disabled = false
}
}),
),
dom.br(),
dom.p('The evaluations below will be sent in a DMARC aggregate report to the addresses found in the published DMARC DNS record, which is fetched again before sending the report. The fields Interval hours, Addresses and Policy are only filled for the first row and whenever a new value in the published DMARC record is encountered.'),
dom.table(
dom.thead(
dom.tr(
dom.th('ID'),
dom.th('Evaluated'),
dom.th('Optional', attr({title: 'Some evaluations will not cause a DMARC aggregate report to be sent. But if a report is sent, optional records are included.'})),
dom.th('Interval hours', attr({title: 'DMARC policies published by a domain can specify how often they would like to receive reports. The default is 24 hours, but can be as often as each hour. To keep reports comparable between different mail servers that send reports, reports are sent at rounded up intervals of whole hours that can divide a 24 hour day, and are aligned with the start of a day at UTC.'})),
dom.th('Addresses', attr({title: 'Addresses that will receive the report. An address can have a maximum report size configured. If there is no address, no report will be sent.'})),
dom.th('Policy', attr({title: 'Summary of the policy as encountered in the DMARC DNS record of the domain, and used for evaluation.'})),
dom.th('IP', attr({title: 'IP address of delivery attempt that was evaluated, relevant for SPF.'})),
dom.th('Disposition', attr({title: 'Our decision to accept/reject this message. It may be different than requested by the published policy. For example, when overriding due to delivery from a mailing list or forwarded address.'})),
dom.th('Aligned DKIM/SPF', attr({title: 'Whether DKIM and SPF had an aligned pass, where strict/relaxed alignment means whether the domain of an SPF pass and DKIM pass matches the exact domain (strict) or optionally a subdomain (relaxed). A DMARC pass requires at least one pass.'})),
dom.th('Envelope to', attr({title: 'Domain used in SMTP RCPT TO during delivery.'})),
dom.th('Envelope from', attr({title: 'Domain used in SMTP MAIL FROM during delivery.'})),
dom.th('Message from', attr({title: 'Domain in "From" message header.'})),
dom.th('DKIM details', attr({title: 'Results of verifying DKIM-Signature headers in message. Only signatures with matching organizational domain are included, regardless of strict/relaxed DKIM alignment in DMARC policy.'})),
dom.th('SPF details', attr({title: 'Results of SPF check used in DMARC evaluation. "mfrom" indicates the "SMTP MAIL FROM" domain was used, "helo" indicates the SMTP EHLO domain was used.'})),
),
),
dom.tbody(
evaluations.map(e => {
const ival = e.IntervalHours + 'h'
const interval = ival === lastInterval ? '' : ival
lastInterval = ival
const a = (e.Addresses || []).join('\n')
const addresses = a === lastAddresses ? '' : a
lastAddresses = a
const p = formatPolicy(e)
const policy = p === lastPolicy ? '' : p
lastPolicy = p
return dom.tr(
dom.td(''+e.ID),
dom.td(new Date(e.Evaluated).toUTCString()),
dom.td(e.Optional ? 'Yes' : ''),
dom.td(interval),
dom.td(addresses),
dom.td(policy),
dom.td(sourceIP(e.SourceIP)),
dom.td(inlineBox(e.Disposition === 'none' ? '' : red, e.Disposition), (e.OverrideReasons || []).length > 0 ? ' ('+e.OverrideReasons.map(r => r.Type).join(', ')+')' : ''),
dom.td(authStatus(e.AlignedDKIMPass), '/', authStatus(e.AlignedSPFPass)),
dom.td(e.EnvelopeTo),
dom.td(e.EnvelopeFrom),
dom.td(e.HeaderFrom),
dom.td(formatDKIMResults(e.DKIMResults || [])),
dom.td(formatSPFResults(e.AlignedSPFPass, e.SPFResults || [])),
)
}),
evaluations.length === 0 ? dom.tr(dom.td(attr({colspan: '14'}), 'No evaluations.')) : [],
),
),
)
}
const utcDate = (dt) => new Date(Date.UTC(dt.getUTCFullYear(), dt.getUTCMonth(), dt.getUTCDate(), dt.getUTCHours(), dt.getUTCMinutes(), dt.getUTCSeconds()))
const utcDateStr = (dt) => [dt.getUTCFullYear(), 1+dt.getUTCMonth(), dt.getUTCDate()].join('-')
const isDayChange = (dt) => utcDateStr(new Date(dt.getTime() - 2*60*1000)) !== utcDateStr(new Date(dt.getTime() + 2*60*1000))
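// period renders a reporting period in UTC. If both timestamps fall on a day boundary (within a ~2 minute tolerance) and the period spans at most a day, only the date is shown, e.g. a whole-day report renders as just "2023-11-1"; otherwise the times are appended.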
const period = (start, end) => {
const beginUTC = utcDate(start)
const endUTC = utcDate(end)
const beginDayChange = isDayChange(beginUTC)
const endDayChange = isDayChange(endUTC)
let beginstr = utcDateStr(beginUTC)
let endstr = utcDateStr(endUTC)
const title = attr({title: '' + beginUTC.toISOString() + ' - ' + endUTC.toISOString()})
if (beginDayChange && endDayChange && Math.abs(beginUTC.getTime() - endUTC.getTime()) < 24 * (2 * 60 + 3600) * 1000) {
return dom.span(beginstr, title)
}
const pad = v => v < 10 ? '0' + v : '' + v
if (!beginDayChange) {
beginstr += ' '+pad(beginUTC.getUTCHours()) + ':' + pad(beginUTC.getUTCMinutes())
}
if (!endDayChange) {
endstr += ' '+pad(endUTC.getUTCHours()) + ':' + pad(endUTC.getUTCMinutes())
}
return dom.span(beginstr + ' - ' + endstr, title)
}
const domainDMARC = async (d) => {
const end = new Date().toISOString()
const start = new Date(new Date().getTime() - 30*24*3600*1000).toISOString()
const [reports, dnsdomain] = await Promise.all([
api.DMARCReports(start, end, d),
api.Domain(d),
])
// todo future: table sorting? period selection (last day, 7 days, 1 month, 1 year, custom period)? collapse rows for a report? show totals per report? a simple bar graph to visualize messages and dmarc/dkim/spf fails? similar for TLSRPT.
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Domain ' + domainString(dnsdomain), '#domains/'+d),
'DMARC aggregate reports',
),
dom.p('DMARC reports are periodically sent by other mail servers that received an email message with a "From" header with our domain. Domains can have a DMARC DNS record that asks other mail servers to send these aggregate reports for analysis.'),
dom.p('Below are the DMARC aggregate reports for the past 30 days.'),
reports.length === 0 ? dom.div('No DMARC reports for domain.') :
dom.table(
dom.thead(
dom.tr(
dom.th('ID'),
dom.th('Organisation', attr({title: 'Organization that sent the DMARC report.'})),
dom.th('Period (UTC)', attr({title: 'Period this report covers. Mail servers are recommended to stick to whole UTC days.'})),
dom.th('Policy', attr({title: 'The DMARC policy that the remote mail server had fetched and applied to the message. A policy that changed during the reporting period may result in unexpected policy evaluations.'})),
dom.th('Source IP', attr({title: 'Remote IP address of session at remote mail server.'})),
dom.th('Messages', attr({title: 'Total messages that the results apply to.'})),
dom.th('Result', attr({title: 'DMARC evaluation result.'})),
dom.th('ADKIM', attr({title: 'DKIM alignment. For a pass, one of the DKIM signatures that pass must be strict/relaxed-aligned with the domain, as specified by the policy.'})),
dom.th('ASPF', attr({title: 'SPF alignment. For a pass, the SPF policy must pass and be strict/relaxed-aligned with the domain, as specified by the policy.'})),
dom.th('SMTP to', attr({title: 'Domain of destination address, as specified during the SMTP session.'})),
dom.th('SMTP from', attr({title: 'Domain of originating address, as specified during the SMTP session.'})),
dom.th('Header from', attr({title: 'Domain of address in From-header of message.'})),
dom.th('Auth Results', attr({title: 'Details of DKIM and/or SPF authentication results. DMARC requires at least one aligned DKIM or SPF pass.'})),
),
),
dom.tbody(
reports.map(r => {
const m = r.ReportMetadata
let policy = []
if (r.PolicyPublished.Domain !== d) {
policy.push(r.PolicyPublished.Domain)
}
const alignments = {'r': 'relaxed', 's': 'strict'}
if (r.PolicyPublished.ADKIM !== '') {
policy.push('dkim '+(alignments[r.PolicyPublished.ADKIM] || r.PolicyPublished.ADKIM))
}
if (r.PolicyPublished.ASPF !== '') {
policy.push('spf '+(alignments[r.PolicyPublished.ASPF] || r.PolicyPublished.ASPF))
}
if (r.PolicyPublished.Policy !== '') {
policy.push('policy '+r.PolicyPublished.Policy)
}
if (r.PolicyPublished.SubdomainPolicy !== '' && r.PolicyPublished.SubdomainPolicy !== r.PolicyPublished.Policy) {
policy.push('subdomain '+r.PolicyPublished.SubdomainPolicy)
}
if (r.PolicyPublished.Percentage !== 100) {
policy.push('' + r.PolicyPublished.Percentage + '%')
}
const sourceIP = (ip) => {
const r = dom.span(ip, attr({title: 'Click to do a reverse lookup of the IP.'}), style({cursor: 'pointer'}), async function click(e) {
e.preventDefault()
try {
const rev = await api.LookupIP(ip)
r.innerText = ip + '\n' + rev.Hostnames.join('\n')
} catch (err) {
r.innerText = ip + '\nerror: ' +err.message
}
})
return r
}
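// Each DKIM/SPF auth result below becomes its own table row; count them so the report-level cells can use a rowspan covering all rows of this report, while record-level cells span only the rows of a single record.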
let authResults = 0
for (const record of r.Records) {
authResults += (record.AuthResults.DKIM || []).length
authResults += (record.AuthResults.SPF || []).length
}
const reportRowspan = attr({rowspan: '' + authResults})
return r.Records.map((record, recordIndex) => {
const row = record.Row
const pol = row.PolicyEvaluated
const ids = record.Identifiers
const dkims = record.AuthResults.DKIM || []
const spfs = record.AuthResults.SPF || []
const recordRowspan = attr({rowspan: '' + (dkims.length+spfs.length)})
const valignTop = style({verticalAlign: 'top'})
const dmarcStatuses = {
none: 'DMARC checks passed or were not applied. This does not mean these messages are definitely not spam though, and they may have been rejected based on other checks, such as reputation or content-based filters.',
quarantine: 'DMARC policy is to mark message as spam.',
reject: 'DMARC policy is to reject the message during SMTP delivery.',
}
const rows = []
const addRow = (...last) => {
const tr = dom.tr(
recordIndex > 0 || rows.length > 0 ? [] : [
dom.td(reportRowspan, valignTop, dom.a('' + r.ID, attr({href: '#domains/' + d + '/dmarc/' + r.ID, title: 'View raw report.'}))),
dom.td(reportRowspan, valignTop, m.OrgName, attr({title: 'Email: ' + m.Email + ', ReportID: ' + m.ReportID})),
dom.td(reportRowspan, valignTop, period(new Date(m.DateRange.Begin*1000), new Date(m.DateRange.End*1000)), m.Errors && m.Errors.length ? dom.span('errors', attr({title: m.Errors.join('; ')})) : []),
dom.td(reportRowspan, valignTop, policy.join(', ')),
],
rows.length > 0 ? [] : [
dom.td(recordRowspan, valignTop, sourceIP(row.SourceIP)),
dom.td(recordRowspan, valignTop, '' + row.Count),
dom.td(recordRowspan, valignTop,
dom.span(pol.Disposition === 'none' ? 'none' : box(red, pol.Disposition), attr({title: pol.Disposition + ': ' + dmarcStatuses[pol.Disposition]})),
(pol.Reasons || []).map(reason => [dom.br(), dom.span(reason.Type + (reason.Comment ? ' (' + reason.Comment + ')' : ''), attr({title: 'Policy was overridden by remote mail server for this reason.'}))]),
),
dom.td(recordRowspan, valignTop, pol.DKIM === 'pass' ? 'pass' : box(yellow, dom.span(pol.DKIM, attr({title: 'No or no valid DKIM-signature is present that is "aligned" with the domain name.'})))),
dom.td(recordRowspan, valignTop, pol.SPF === 'pass' ? 'pass' : box(yellow, dom.span(pol.SPF, attr({title: 'No SPF policy was found, or IP is not allowed by policy, or domain name is not "aligned" with the domain name.'})))),
dom.td(recordRowspan, valignTop, ids.EnvelopeTo),
dom.td(recordRowspan, valignTop, ids.EnvelopeFrom),
dom.td(recordRowspan, valignTop, ids.HeaderFrom),
],
dom.td(last),
)
rows.push(tr)
}
for (const dkim of dkims) {
const statuses = {
none: 'Message was not signed',
pass: 'Message was signed and signature was verified.',
fail: 'Message was signed, but signature was invalid.',
policy: 'Message was signed, but signature is not accepted by policy.',
neutral: 'Message was signed, but the signature contains an error or could not be processed. This status is also used for errors not covered by other statuses.',
temperror: 'Message could not be verified. E.g. because of DNS resolve error. A later attempt may succeed. A missing DNS record is treated as a temporary error; a new key may not have propagated through DNS shortly after it was taken into use.',
permerror: 'Message cannot be verified. E.g. when a required header field is absent or for invalid (combination of) parameters. We typically set this if a DNS record does not allow the signature, e.g. due to algorithm mismatch or expiry.',
}
const dkimOK = {none: true, pass: true}
addRow(
'dkim: ',
dom.span(dkimOK[dkim.Result] ? dkim.Result : box(yellow, dkim.Result), attr({title: (dkim.HumanResult ? 'additional information: ' + dkim.HumanResult + ';\n' : '') + dkim.Result + ': ' + (statuses[dkim.Result] || 'invalid status')})),
!dkim.Selector ? [] : [
', ',
dom.span(dkim.Selector, attr({title: 'Selector, the DKIM record is at "<selector>._domainkey.<domain>".' + (dkim.Domain === d ? '' : ';\ndomain: ' + dkim.Domain)})),
]
)
}
for (const spf of spfs) {
const statuses = {
none: 'No SPF policy found.',
neutral: 'Policy states nothing about IP, typically due to "?" qualifier in SPF record.',
pass: 'IP is authorized.',
fail: 'IP is explicitly not authorized, due to "-" qualifier in SPF record.',
softfail: 'Weak statement that IP is probably not authorized, "~" qualifier in SPF record.',
temperror: 'Trying again later may succeed, e.g. for temporary DNS lookup error.',
permerror: 'Error requiring some intervention to correct. E.g. invalid DNS record.',
}
const spfOK = {none: true, neutral: true, pass: true}
addRow(
'spf: ',
dom.span(spfOK[spf.Result] ? spf.Result : box(yellow, spf.Result), attr({title: spf.Result + ': ' + (statuses[spf.Result] || 'invalid status')})),
', ',
dom.span(spf.Scope, attr({title: 'scopes:\nhelo: "SMTP HELO"\nmfrom: SMTP "MAIL FROM"'})),
' ',
dom.span(spf.Domain),
)
}
return rows
})
}),
),
)
)
}
const domainDMARCReport = async (d, reportID) => {
const [report, dnsdomain] = await Promise.all([
api.DMARCReportID(d, reportID),
api.Domain(d),
])
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Domain ' + domainString(dnsdomain), '#domains/'+d),
crumblink('DMARC aggregate reports', '#domains/' + d + '/dmarc'),
'Report ' + reportID
),
dom.p('Below is the raw report as received from the remote mail server.'),
dom('div.literal', JSON.stringify(report, null, '\t')),
)
}
const tlsrpt = async () => {
const end = new Date().toISOString()
const start = new Date(new Date().getTime() - 30*24*3600*1000).toISOString()
const summaries = await api.TLSRPTSummaries(start, end, '')
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'TLS reports (TLSRPT)',
),
dom.p('TLSRPT (TLS reporting) is a mechanism to request feedback from other mail servers about TLS connections to your mail server. It is typically used along with MTA-STS and/or DANE to enforce that SMTP connections are protected with TLS. Mail servers implementing TLSRPT will typically send a daily report with both successful and failed connection counts, including details about failures.'),
renderTLSRPTSummaries(summaries)
)
}
const renderTLSRPTSummaries = (summaries) => {
return [
dom.p('Below is a summary of TLS reports for the past 30 days.'),
summaries.length === 0 ? dom.div(box(yellow, 'No domains with TLS reports.')) :
dom.table(
dom.thead(
dom.tr(
dom.th('Domain', attr({title: ''})),
dom.th('Successes', attr({title: ''})),
dom.th('Failures', attr({title: ''})),
dom.th('Failure details', attr({title: ''})),
)
),
dom.tbody(
summaries.map(r =>
dom.tr(
dom.td(dom.a(attr({href: '#domains/' + r.Domain + '/tlsrpt', title: 'See report details.'}), r.Domain)),
dom.td(style({textAlign: 'right'}), '' + r.Success),
dom.td(style({textAlign: 'right'}), '' + r.Failure),
dom.td(!r.ResultTypeCounts ? [] : Object.entries(r.ResultTypeCounts).map(kv => kv[0] + ': ' + kv[1]).join('; ')),
)
),
),
)
]
}
const domainTLSRPT = async (d) => {
const end = new Date().toISOString()
const start = new Date(new Date().getTime() - 30*24*3600*1000).toISOString()
const [records, dnsdomain] = await Promise.all([
api.TLSReports(start, end, d),
api.Domain(d),
])
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Domain ' + domainString(dnsdomain), '#domains/'+d),
'TLSRPT',
),
dom.p('TLSRPT (TLS reporting) is a mechanism to request feedback from other mail servers about TLS connections to your mail server. It is typically used along with MTA-STS and/or DANE to enforce that SMTP connections are protected with TLS. Mail servers implementing TLSRPT will typically send a daily report with both successful and failed connection counts, including details about failures.'),
dom.p('Below are the TLS reports for the past 30 days.'),
records.length === 0 ? dom.div('No TLS reports for domain.') :
dom.table(
dom.thead(
dom.tr(
dom.th('Report', attr({colspan: '3'})),
dom.th('Policy', attr({colspan: '3'})),
dom.th('Failure Details', attr({colspan: '8'})),
),
dom.tr(
dom.th('ID'),
dom.th('From', attr({title: 'SMTP MAIL FROM address from which we received the report.'})),
dom.th('Period (UTC)', attr({title: 'Period this report covers. Mail servers are recommended to stick to whole UTC days.'})),
dom.th('Policy', attr({title: 'The policy applied, typically STSv1.'})),
dom.th('Successes', attr({title: 'Total number of successful TLS connections for policy.'})),
dom.th('Failures', attr({title: 'Total number of failed TLS connections for policy.'})),
dom.th('Result Type', attr({title: 'Type of failure.'})),
dom.th('Sending MTA', attr({title: 'IP of sending MTA.'})),
dom.th('Receiving MX Host'),
dom.th('Receiving MX HELO'),
dom.th('Receiving IP'),
dom.th('Count', attr({title: 'Number of TLS connections that failed with these details.'})),
dom.th('More', attr({title: 'Optional additional information about the failure.'})),
dom.th('Code', attr({title: 'Optional API error code relating to the failure.'})),
),
),
dom.tbody(
records.map(record => {
const r = record.Report
const reportRowSpan = attr({rowspan: ''+r.policies.length})
const valignTop = style({verticalAlign: 'top'})
const alignRight = style({textAlign: 'right'})
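// Build one row per failure detail, falling back to a single row when a policy reports none; rowspans let the report- and policy-level cells appear once for their group of rows.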
return r.policies.map((result, index) => {
const rows = []
const details = result['failure-details'] || []
const resultRowSpan = attr({rowspan: ''+(details.length || 1)})
const addRow = (d) => {
const row = dom.tr(
index > 0 || rows.length > 0 ? [] : [
dom.td(reportRowSpan, valignTop, dom.a(''+record.ID, attr({href: '#domains/' + record.Domain + '/tlsrpt/'+record.ID}))),
dom.td(reportRowSpan, valignTop, r['organization-name'] || r['contact-info'] || record.MailFrom || '', attr({title: 'Organization: ' +r['organization-name'] + '; \nContact info: ' + r['contact-info'] + '; \nReport ID: ' + r['report-id'] + '; \nMail from: ' + record.MailFrom, })),
dom.td(reportRowSpan, valignTop, period(new Date(r['date-range']['start-datetime']), new Date(r['date-range']['end-datetime']))),
],
index > 0 ? [] : [
dom.td(resultRowSpan, valignTop, '' + result.policy['policy-type']+': '+((result.policy['policy-string'] || []).filter(s => s.startsWith('mode:'))[0] || '(no policy)').replace('mode:', '').trim(), attr({title: (result.policy['policy-string'] || []).join('\n')})),
dom.td(resultRowSpan, valignTop, alignRight, '' + result.summary['total-successful-session-count']),
dom.td(resultRowSpan, valignTop, alignRight, '' + result.summary['total-failure-session-count']),
],
!d ? dom.td(attr({colspan: '8'})) : [
dom.td(d['result-type']),
dom.td(d['sending-mta-ip']),
dom.td(d['receiving-mx-hostname']),
dom.td(d['receiving-mx-helo']),
dom.td(d['receiving-ip']),
dom.td(alignRight, '' + d['failed-session-count']),
dom.td(d['additional-information']),
dom.td(d['failure-reason-code']),
],
)
rows.push(row)
}
for (const d of details) {
addRow(d)
}
if (!details.length) {
addRow()
}
return rows
})
})
),
)
)
}
const domainTLSRPTID = async (d, reportID) => {
const [report, dnsdomain] = await Promise.all([
api.TLSReportID(d, reportID),
api.Domain(d),
])
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
crumblink('Domain ' + domainString(dnsdomain), '#domains/'+d),
crumblink('TLS report', '#domains/' + d + '/tlsrpt'),
'Report ' + reportID
),
dom.p('Below is the raw report as received from the remote mail server.'),
dom('div.literal', JSON.stringify(report, null, '\t')),
)
}
const mtasts = async () => {
const policies = await api.MTASTSPolicies()
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'MTA-STS policies',
),
dom.p("MTA-STS is a mechanism allowing email domains to publish a policy for using SMTP STARTTLS and TLS verification. See ", link('https://www.rfc-editor.org/rfc/rfc8461.html', 'RFC 8461'), '.'),
dom.p("The SMTP protocol is unencrypted by default, though the SMTP STARTTLS command is typically used to enable TLS on a connection. However, MTA's using STARTTLS typically do not validate the TLS certificate. An MTA-STS policy can specify that validation of host name, non-expiration and webpki trust is required."),
makeMTASTSTable(policies),
)
}
const formatMTASTSMX = (mx) => {
return (mx || []).map(e => {
return (e.Wildcard ? '*.' : '') + e.Domain.ASCII
}).join(', ')
}
const makeMTASTSTable = items => {
if (!items || !items.length) {
return dom.div('No data')
}
// Elements: Field name in JSON, column name override, title for column name.
const keys = [
["LastUse", "", "Last time this policy was used."],
["Domain", "Domain", "Domain this policy was retrieved from and this policy applies to."],
["Backoff", "", "If true, a DNS record for MTA-STS exists, but a policy could not be fetched. This indicates a failure with MTA-STS."],
["RecordID", "", "Unique ID for this policy. Each time a domain changes its policy, it must also change the record ID that is published in DNS to propagate the change."],
["Version", "", "For valid MTA-STS policies, this must be 'STSv1'."],
["Mode", "", "'enforce': TLS must be used and certificates must be validated; 'none': TLS and certificate validation is not required, typically only useful for removing once-used MTA-STS; 'testing': TLS should be used and certificated should be validated, but fallback to unverified TLS or plain text is allowed, but such cases must be reported"],
["MX", "", "The MX hosts that are configured to do TLS. If TLS and validation is required, but an MX host is not on this list, delivery will not be attempted to that host."],
["MaxAgeSeconds", "", "How long a policy can be cached and reused after it was fetched. Typically in the order of weeks."],
["Extensions", "", "Free-form extensions in the MTA-STS policy."],
["ValidEnd", "", "Until when this cached policy is valid, based on time the policy was fetched and the policy max age. Non-failure policies are automatically refreshed before they become invalid."],
["LastUpdate", "", "Last time this policy was updated."],
["Inserted", "", "Time when the policy was first inserted."],
]
const nowSecs = new Date().getTime()/1000
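// Use a single reference time so all relative times (age) shown in the table are computed against the same moment.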
return dom.table(
dom.thead(
dom.tr(keys.map(kt => dom.th(dom.span(attr({title: kt[2]}), kt[1] || kt[0])))),
),
dom.tbody(
items.map(item =>
dom.tr(
keys.map(kt => {
const k = kt[0]
let v = ''
switch (k) {
case 'MX':
v = formatMTASTSMX(item[k])
break
case 'Inserted':
case 'ValidEnd':
case 'LastUpdate':
case 'LastUse':
v = age(new Date(item[k]), k === 'ValidEnd', nowSecs)
break
default:
if (item[k] !== null) {
v = ''+item[k]
}
}
return dom.td(v)
})
)
),
),
)
}
const dnsbl = async () => {
const ipZoneResults = await api.DNSBLStatus()
const url = (ip) => {
return 'https://multirbl.valli.org/lookup/' + encodeURIComponent(ip) + '.html'
}
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'DNS blocklist status for IPs',
),
dom.p('Follow the external links to a third party DNSBL checker to see if the IP is on one of the many blocklists.'),
dom.ul(
Object.entries(ipZoneResults).sort().map(ipZones => {
const [ip, zoneResults] = ipZones
return dom.li(
link(url(ip), ip),
!Object.entries(zoneResults).length ? [] : dom.ul(
Object.entries(zoneResults).sort().map(zoneResult =>
dom.li(
zoneResult[0] + ': ',
zoneResult[1] === 'pass' ? 'pass' : box(red, zoneResult[1]),
),
),
),
)
})
),
!Object.entries(ipZoneResults).length ? box(red, 'No IPs found.') : [],
)
}
const queueList = async () => {
const [msgs, transports] = await Promise.all([
api.QueueList(),
api.Transports(),
])
const nowSecs = new Date().getTime()/1000
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'Queue',
),
msgs.length === 0 ? 'Currently no messages in the queue.' : [
dom.p('The messages below are currently in the queue.'),
// todo: sorting by address/timestamps/attempts. perhaps filtering.
dom.table(
dom.thead(
dom.tr(
dom.th('ID'),
dom.th('Submitted'),
dom.th('From'),
dom.th('To'),
dom.th('Size'),
dom.th('Attempts'),
dom.th('Next attempt'),
dom.th('Last attempt'),
dom.th('Last error'),
implement "requiretls", rfc 8689
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:
1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so entire transport is verified-tls-protected).
2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).
we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.
for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select the "fallback to insecure" to add the
"tls-required: no" header.
new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.
the admin can change the requiretls status for messages in the queue. so with
default delivery attempts, when verified tls is required by failing, an admin
could potentially change the field to "tls-required: no"-behaviour.
messages received (over smtp) with the requiretls option, get a comment added
to their Received header line, just before "id", after "with".
2023-10-24 11:06:16 +03:00
dom.th('Require TLS'),
dom.th('Transport/Retry'),
dom.th('Remove'),
),
),
dom.tbody(
implement "requiretls", rfc 8689
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:
1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so entire transport is verified-tls-protected).
2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).
we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.
for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select the "fallback to insecure" to add the
"tls-required: no" header.
new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.
the admin can change the requiretls status for messages in the queue. so with
default delivery attempts, when verified tls is required by failing, an admin
could potentially change the field to "tls-required: no"-behaviour.
messages received (over smtp) with the requiretls option, get a comment added
to their Received header line, just before "id", after "with".
2023-10-24 11:06:16 +03:00
msgs.map(m => {
let requiretls, requiretlsFieldset, transport
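// Assigned below when the per-message form controls are created, so the submit handlers can read their values and disable the fieldset while saving.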
return dom.tr(
dom.td(''+m.ID),
dom.td(age(new Date(m.Queued), false, nowSecs)),
dom.td(m.SenderLocalpart+"@"+ipdomainString(m.SenderDomain)), // todo: escaping of localpart
dom.td(m.RecipientLocalpart+"@"+ipdomainString(m.RecipientDomain)), // todo: escaping of localpart
dom.td(formatSize(m.Size)),
dom.td(''+m.Attempts),
dom.td(age(new Date(m.NextAttempt), true, nowSecs)),
dom.td(m.LastAttempt ? age(new Date(m.LastAttempt), false, nowSecs) : '-'),
dom.td(m.LastError || '-'),
dom.td(
dom.form(
requiretlsFieldset=dom.fieldset(
requiretls=dom.select(
attr({title: 'How to use TLS for message delivery over SMTP:\n\nDefault: Delivery attempts follow the policies published by the recipient domain: Verification with MTA-STS and/or DANE, or optional opportunistic unverified STARTTLS if the domain does not specify a policy.\n\nWith RequireTLS: For sensitive messages, you may want to require verified TLS. The recipient destination domain SMTP server must support the REQUIRETLS SMTP extension for delivery to succeed. It is automatically chosen when the destination domain mail servers of all recipients are known to support it.\n\nFallback to insecure: If delivery fails due to MTA-STS and/or DANE policies specified by the recipient domain, and the content is not sensitive, you may choose to ignore the recipient domain TLS policies so delivery can succeed.'}),
dom.option('Default', attr({value: ''})),
dom.option('With RequireTLS', attr({value: 'yes'}), m.RequireTLS === true ? attr({selected: ''}) : []),
dom.option('Fallback to insecure', attr({value: 'no'}), m.RequireTLS === false ? attr({selected: ''}) : []),
),
' ',
dom.button('Save'),
),
async function submit(e) {
e.preventDefault()
try {
requiretlsFieldset.disabled = true
await api.QueueSaveRequireTLS(m.ID, requiretls.value === '' ? null : requiretls.value === 'yes')
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
requiretlsFieldset.disabled = false
}
}
),
),
implement "requiretls", rfc 8689
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:
1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so entire transport is verified-tls-protected).
2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).
we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.
for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select the "fallback to insecure" to add the
"tls-required: no" header.
new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.
the admin can change the requiretls status for messages in the queue. so with
default delivery attempts, when verified tls is required by failing, an admin
could potentially change the field to "tls-required: no"-behaviour.
messages received (over smtp) with the requiretls option, get a comment added
to their Received header line, just before "id", after "with".
2023-10-24 11:06:16 +03:00
dom.td(
dom.form(
transport=dom.select(
attr({title: 'Transport to use for delivery attempts. The default is direct delivery, connecting to the MX hosts of the domain.'}),
dom.option('(default)', attr({value: ''})),
Object.keys(transports).sort().map(t => dom.option(t, m.Transport === t ? attr({selected: ''}) : [])),
),
' ',
dom.button('Retry now'),
async function submit(e) {
e.preventDefault()
try {
e.target.disabled = true
await api.QueueKick(m.ID, transport.value)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
e.target.disabled = false
}
window.location.reload() // todo: only refresh the list
}
),
),
dom.td(
dom.button('Remove', async function click(e) {
e.preventDefault()
if (!window.confirm('Are you sure you want to remove this message? It will be removed completely.')) {
return
}
try {
e.target.disabled = true
await api.QueueDrop(m.ID)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
return
} finally {
e.target.disabled = false
}
window.location.reload() // todo: only refresh the list
}),
),
)
})
),
),
],
)
}
const webserver = async () => {
let conf = await api.WebserverConfig()
// We disable this while saving the form.
let fieldset
// Keep track of redirects. Rows are objects that hold both the DOM and allow
// retrieving the visible (modified) data to construct a config for saving.
let redirectRows = []
let redirectsTbody
let noredirect
// Similar to redirects, but for web handlers.
let handlerRows = []
let handlersTbody
let nohandler
// Make a new redirect row, adding it to the list. The caller typically uses this
// while building the DOM; the element is added because this object has it as its
// "root" field.
const redirectRow = (t) => {
const row = {}
row.root = dom.tr(
dom.td(
row.from=dom.input(attr({required: '', value: domainName(t[0])})),
),
dom.td(
row.to=dom.input(attr({required: '', value: domainName(t[1])})),
),
dom.td(
dom.button('Remove', attr({type: 'button'}), function click(e) {
redirectRows = redirectRows.filter(r => r !== row)
row.root.remove()
noredirect.style.display = redirectRows.length ? 'none' : ''
}),
),
)
// "get" is the common function to retrieve the data from an object with a root field as DOM element.
row.get = () => [row.from.value, row.to.value]
redirectRows.push(row)
return row
}
// Reusable component for managing headers. Just a table with a header key and
// value. We can remove existing rows, add new rows, and edit existing ones.
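// Illustrative use (values are just an example): const hdrs = makeHeaders({'Cache-Control': 'no-store'});
// hdrs.root is the table element, hdrs.add is a button that appends an empty row, and
// hdrs.get() returns the currently shown headers as a plain object.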
const makeHeaders = (h) => {
const r = {
rows: [],
}
let tbody, norow
const headerRow = (k, v) => {
const row = {}
row.root = dom.tr(
dom.td(
row.key=dom.input(attr({required: '', value: k})),
),
dom.td(
row.value=dom.input(attr({required: '', value: v})),
),
dom.td(
dom.button('Remove', attr({type: 'button'}), function click(e) {
r.rows = r.rows.filter(x => x !== row)
row.root.remove()
norow.style.display = r.rows.length ? 'none' : ''
})
),
)
r.rows.push(row)
row.get = () => [row.key.value, row.value.value]
return row
}
r.add = dom.button('Add', attr({type: 'button'}), function click(e) {
const row = headerRow('', '')
tbody.appendChild(row.root)
norow.style.display = r.rows.length ? 'none' : ''
})
r.root = dom.table(
tbody=dom.tbody(
Object.entries(h).sort().map(t => headerRow(t[0], t[1])),
norow=dom.tr(
style({display: r.rows.length ? 'none' : ''}),
dom.td(attr({colspan: 3}), 'None added.'),
)
),
)
r.get = () => Object.fromEntries(r.rows.map(row => row.get()))
return r
}
// todo: make a mechanism to get the ../config/config.go sconf-doc struct tags
// here, so we can use them for the titles, as documentation, instead of the
// current approach of copy/pasting those texts, which will inevitably get out of date.
// todo: perhaps lay these out in the same way as in the config file? will help admins mentally map between the two. will take a bit more vertical screen space, but current approach looks messy/garbled. we could use that mechanism for more parts of the configuration file. we can even show the same sconf-doc struct tags. the html admin page will then just be a glorified guided text editor!
// Make a handler row. This is more complicated, since it can be one of the three
// types (static, redirect, forward), and can change between those types.
const handlerRow = (wh) => {
// We make and remember components for headers, possibly not used.
const row = {
staticHeaders: makeHeaders((wh.WebStatic || {}).ResponseHeaders || {}),
forwardHeaders: makeHeaders((wh.WebForward || {}).ResponseHeaders || {}),
}
const makeWebStatic = () => {
const ws = wh.WebStatic || {}
row.getDetails = () => {
return {
StripPrefix: row.StripPrefix.value,
Root: row.Root.value,
ListFiles: row.ListFiles.checked,
ContinueNotFound: row.ContinueNotFound.checked,
ResponseHeaders: row.staticHeaders.get(),
}
}
return dom.table(
dom.tr(
dom.td('Type'),
dom.td(
'StripPrefix',
attr({title: 'Path to strip from the request URL before evaluating to a local path. If the requested URL path does not start with this prefix and ContinueNotFound is set, the handler is considered non-matching and the next WebHandlers are tried. If ContinueNotFound is not set, a file not found (404) is returned in that case.'}),
),
dom.td(
'Root',
attr({title: 'Directory to serve files from for this handler. Keep in mind that relative paths are relative to the working directory of mox.'}),
),
dom.td(
'ListFiles',
attr({title: 'If set, and a directory is requested, and no index.html is present that can be served, a file listing is returned. Results in 403 if ListFiles is not set. If a directory is requested and the URL does not end with a slash, the response is a redirect to the path with trailing slash.'}),
),
dom.td(
'ContinueNotFound',
attr({title: "If a requested URL does not exist, don't return a file not found (404) response, but consider this handler non-matching and continue attempts to serve with later WebHandlers, which may be a reverse proxy generating dynamic content, possibly even writing a static file for a next request to serve statically. If ContinueNotFound is set, HTTP requests other than GET and HEAD do not match. This mechanism can be used to implement the equivalent of 'try_files' in other webservers."}),
),
dom.td(
dom.span(
'Response headers',
attr({title: 'Headers to add to the response. Useful for cache-control, content-type, etc. By default, Content-Type headers are automatically added for recognized file types, unless added explicitly through this setting. For directory listings, a content-type header is skipped.'}),
),
' ',
row.staticHeaders.add,
),
),
dom.tr(
dom.td(
row.type=dom.select(
attr({required: ''}),
dom.option('Static', attr({selected: ''})),
dom.option('Redirect'),
dom.option('Forward'),
function change(e) {
makeType(e.target.value)
},
),
),
dom.td(
row.StripPrefix=dom.input(attr({value: ws.StripPrefix || ''})),
),
dom.td(
row.Root=dom.input(attr({required: '', placeholder: 'web/...', value: ws.Root || ''})),
),
dom.td(
row.ListFiles=dom.input(attr({type: 'checkbox'}), ws.ListFiles ? attr({checked: ''}) : []),
),
dom.td(
row.ContinueNotFound=dom.input(attr({type: 'checkbox'}), ws.ContinueNotFound ? attr({checked: ''}) : []),
),
dom.td(
row.staticHeaders,
),
)
)
}
const makeWebRedirect = () => {
const wr = wh.WebRedirect || {}
row.getDetails = () => {
return {
BaseURL: row.BaseURL.value,
OrigPathRegexp: row.OrigPathRegexp.value,
ReplacePath: row.ReplacePath.value,
StatusCode: row.StatusCode.value ? parseInt(row.StatusCode.value) : 0,
}
}
return dom.table(
dom.tr(
dom.td('Type'),
dom.td(
'BaseURL',
attr({title: 'Base URL to redirect to. The path must be empty and will be replaced, either by the request URL path, or by OrigPathRegexp/ReplacePath. Scheme, host, port and fragment stay intact, and query strings are combined. If empty, the response redirects to a different path through OrigPathRegexp and ReplacePath, which must then be set. Use a URL without scheme to redirect without changing the protocol, e.g. //newdomain/. If a redirect would send a request to a URL with the same scheme, host and path, the WebRedirect does not match so a next WebHandler can be tried. This can be used to redirect all plain http traffic to https.'}),
),
dom.td(
'OrigPathRegexp',
attr({title: 'Regular expression for matching path. If set and path does not match, a 404 is returned. The HTTP path used for matching always starts with a slash.'}),
),
dom.td(
'ReplacePath',
attr({title: "Replacement path for destination URL based on OrigPathRegexp. Implemented with Go's Regexp.ReplaceAllString: $1 is replaced with the text of the first submatch, etc. If both OrigPathRegexp and ReplacePath are empty, BaseURL must be set and all paths are redirected unaltered."}),
),
dom.td(
'StatusCode',
attr({title: 'Status code to use in redirect, e.g. 307. By default, a permanent redirect (308) is returned.'}),
),
),
dom.tr(
dom.td(
row.type=dom.select(
attr({required: ''}),
dom.option('Static'),
dom.option('Redirect', attr({selected: ''})),
dom.option('Forward'),
function change(e) {
makeType(e.target.value)
},
),
),
dom.td(
row.BaseURL=dom.input(attr({placeholder: 'empty or https://target/path?q=1#frag or //target/...', value: wr.BaseURL || ''})),
),
dom.td(
row.OrigPathRegexp=dom.input(attr({placeholder: '^/old/(.*)', value: wr.OrigPathRegexp || ''})),
),
dom.td(
row.ReplacePath=dom.input(attr({placeholder: '/new/$1', value: wr.ReplacePath || ''})),
),
dom.td(
row.StatusCode=dom.input(style({width: '4em'}), attr({type: 'number', value: wr.StatusCode || '', min: 300, max: 399})),
),
),
)
}
const makeWebForward = () => {
const wf = wh.WebForward || {}
row.getDetails = () => {
return {
StripPath: row.StripPath.checked,
URL: row.URL.value,
ResponseHeaders: row.forwardHeaders.get(),
}
}
return dom.table(
dom.tr(
dom.td('Type'),
dom.td(
'StripPath',
attr({title: 'Strip the matching WebHandler path from the request URL before forwarding the request.'}),
),
dom.td(
'URL',
attr({title: "URL to forward HTTP requests to, e.g. http://127.0.0.1:8123/base. If StripPath is false the full request path is added to the URL. Host headers are sent unmodified. New X-Forwarded-{For,Host,Proto} headers are set. Any query string in the URL is ignored. Requests are made using Go's net/http.DefaultTransport that takes environment variables HTTP_PROXY and HTTPS_PROXY into account. Websocket connections are forwarded and data is copied between client and backend without looking at the framing. The websocket 'version' and 'key'/'accept' headers are verified during the handshake, but other websocket headers, including 'origin', 'protocol' and 'extensions' headers, are not inspected and the backend is responsible for verifying/interpreting them."}),
),
dom.td(
dom.span(
'Response headers',
attr({title: 'Headers to add to the response. Useful for adding security- and cache-related headers.'}),
),
' ',
row.forwardHeaders.add,
),
),
dom.tr(
dom.td(
row.type=dom.select(
attr({required: ''}),
dom.option('Static', ),
dom.option('Redirect'),
dom.option('Forward', attr({selected: ''})),
function change(e) {
makeType(e.target.value)
},
),
),
dom.td(
row.StripPath=dom.input(attr({type: 'checkbox'}), wf.StripPath || wf.StripPath === undefined ? attr({checked: ''}) : []),
),
dom.td(
row.URL=dom.input(attr({required: '', placeholder: 'http://127.0.0.1:8888', value: wf.URL || ''})),
),
dom.td(
row.forwardHeaders,
),
),
)
}
// Transform the input fields to match the type of WebHandler.
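// Each make* function above also resets row.getDetails, so saving always reads the fields of the currently selected handler type.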
const makeType = (s) => {
let details
if (s === 'Static') {
details = makeWebStatic()
} else if (s === 'Redirect') {
details = makeWebRedirect()
} else if (s === 'Forward') {
details = makeWebForward()
} else {
throw new Error('unknown handler type')
}
row.details.replaceWith(details)
row.details = details
}
// Remove row from oindex, insert it in nindex. Both in handlerRows and in the DOM.
const moveHandler = (row, oindex, nindex) => {
row.root.remove()
handlersTbody.insertBefore(row.root, handlersTbody.children[nindex])
handlerRows.splice(oindex, 1)
handlerRows.splice(nindex, 0, row)
}
// Row starts with two tables: one for the fields all WebHandlers have in
// common, and one for the details, i.e. WebStatic, WebRedirect, WebForward.
row.root = dom.tr(
dom.td(
dom.table(
dom.tr(
dom.td('LogName', attr({title: 'Name used during logging for requests matching this handler. If empty, the index of the handler in the list is used.'})),
dom.td('Domain', attr({title: 'Request must be for this domain to match this handler.'})),
dom.td('Path Regexp', attr({title: 'Request must match this path regular expression to match this handler. Must start with a ^.'})),
dom.td('To HTTPS', attr({title: 'Redirect plain HTTP (non-TLS) requests to HTTPS.'})),
dom.td('Compress', attr({title: 'Transparently compress responses (currently with gzip) if the client supports it, the status is 200 OK, no Content-Encoding is set on the response yet and the Content-Type of the response hints that the data is compressible (text/..., specific application/... and .../...+json and .../...+xml). For static files only, a cache with compressed files is kept.'})),
),
dom.tr(
dom.td(
row.LogName=dom.input(attr({value: wh.LogName || ''})),
),
dom.td(
row.Domain=dom.input(attr({required: '', placeholder: 'example.org', value: domainName(wh.DNSDomain)})),
),
dom.td(
row.PathRegexp=dom.input(attr({required: '', placeholder: '^/', value: wh.PathRegexp || ''})),
),
dom.td(
row.ToHTTPS=dom.input(attr({type: 'checkbox', title: 'Redirect plain HTTP (non-TLS) requests to HTTPS'}), !wh.DontRedirectPlainHTTP ? attr({checked: ''}) : []),
),
dom.td(
row.Compress=dom.input(attr({type: 'checkbox', title: 'Transparently compress responses.'}), wh.Compress ? attr({checked: ''}) : []),
),
),
),
// Replaced with a call to makeType, below (and later when switching types).
row.details=dom.table(),
),
dom.td(
dom.button('Remove', attr({type: 'button'}), function click(e) {
handlerRows = handlerRows.filter(r => r !== row)
row.root.remove()
nohandler.style.display = handlerRows.length ? 'none' : ''
}),
' ',
// We show/hide the buttons to move when clicking the Move button.
row.moveButtons=dom.span(
style({display: 'none'}),
dom.button('↑↑', attr({type: 'button', title: 'Move to top.'}), function click(e) {
const index = handlerRows.findIndex(r => r === row)
if (index > 0) {
moveHandler(row, index, 0)
}
}),
' ',
dom.button('↑', attr({type: 'button', title: 'Move one up.'}), function click(e) {
const index = handlerRows.findIndex(r => r === row)
if (index > 0) {
moveHandler(row, index, index-1)
}
}),
' ',
dom.button('↓', attr({type: 'button', title: 'Move one down.'}), function click(e) {
const index = handlerRows.findIndex(r => r === row)
if (index+1 < handlerRows.length) {
moveHandler(row, index, index+1)
}
}),
' ',
dom.button('↓↓', attr({type: 'button', title: 'Move to bottom.'}), function click(e) {
const index = handlerRows.findIndex(r => r === row)
if (index+1 < handlerRows.length) {
moveHandler(row, index, handlerRows.length-1)
}
}),
),
),
)
// Final "get" that returns a WebHandler that reflects the UI.
row.get = () => {
const wh = {
LogName: row.LogName.value,
Domain: row.Domain.value,
PathRegexp: row.PathRegexp.value,
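// The UI shows a positive 'To HTTPS' checkbox; the config stores the inverse, DontRedirectPlainHTTP.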
DontRedirectPlainHTTP: !row.ToHTTPS.checked,
Compress: row.Compress.checked,
}
const s = row.type.value
const details = row.getDetails()
if (s === 'Static') {
wh.WebStatic = details
} else if (s === 'Redirect') {
wh.WebRedirect = details
} else if (s === 'Forward') {
wh.WebForward = details
}
return wh
}
// Initialize one of the Web* types.
let s
if (wh.WebStatic) {
s = 'Static'
} else if (wh.WebRedirect) {
s = 'Redirect'
} else if (wh.WebForward) {
s = 'Forward'
}
makeType(s)
handlerRows.push(row)
return row
}
// Return webserver config to store.
const gatherConf = () => {
return {
WebDomainRedirects: redirectRows.map(row => row.get()),
WebHandlers: handlerRows.map(row => row.get()),
}
}
// Add and move buttons, both above and below the table for quick access, hence a function.
const handlerActions = () => {
return [
'Action ',
dom.button('Add', attr({type: 'button'}), function click(e) {
// New WebHandler added as WebForward. Good chance this is what the user wants. And
// it has the least fields. (;
const nwh = {
LogName: '',
DNSDomain: {ASCII: ''},
PathRegexp: '^/',
DontRedirectPlainHTTP: false,
WebForward: {
StripPath: true,
URL: '',
},
}
const row = handlerRow(nwh)
handlersTbody.appendChild(row.root)
nohandler.style.display = handlerRows.length ? 'none' : ''
}),
' ',
dom.button('Move', attr({type: 'button'}), function click(e) {
for(const row of handlerRows) {
row.moveButtons.style.display = row.moveButtons.style.display === 'none' ? '' : 'none'
}
}),
]
}
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Admin', '#'),
'Webserver config',
),
dom.form(
fieldset=dom.fieldset(
dom.h2('Domain redirects', attr({title: 'Corresponds with WebDomainRedirects in domains.conf'})),
dom.p('Incoming requests for these domains are redirected to the target domain, with HTTPS.'),
dom.table(
dom.thead(
dom.tr(
dom.th('From'),
dom.th('To'),
dom.th(
'Action ',
dom.button('Add', attr({type: 'button'}), function click(e) {
const row = redirectRow([{ASCII: ''}, {ASCII: ''}])
redirectsTbody.appendChild(row.root)
noredirect.style.display = redirectRows.length ? 'none' : ''
}),
),
),
),
redirectsTbody=dom.tbody(
(conf.WebDNSDomainRedirects || []).sort().map(t => redirectRow(t)),
noredirect=dom.tr(
style({display: redirectRows.length ? 'none' : ''}),
dom.td(attr({colspan: 3}), 'No redirects.'),
),
),
),
dom.br(),
dom.h2('Handlers', attr({title: 'Corresponds with WebHandlers in domains.conf'})),
dom.p('Each incoming request is checked against these handlers, in order. The first matching handler serves the request.'),
dom('table.long',
dom.thead(
dom.tr(
dom.th(),
dom.th(handlerActions()),
),
),
handlersTbody=dom.tbody(
(conf.WebHandlers || []).map(wh => handlerRow(wh)),
nohandler=dom.tr(
style({display: handlerRows.length ? 'none' : ''}),
dom.td(attr({colspan: 2}), 'No handlers.'),
),
),
dom.tfoot(
dom.tr(
dom.th(),
dom.th(handlerActions()),
),
),
),
dom.br(),
dom.button('Save', attr({type: 'submit'}), attr({title: 'Save config. If the configuration has changed since this page was loaded, an error will be returned. After saving, the changes take effect immediately.'})),
),
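// The config as loaded is sent along with the new config, so the server can refuse the save if the config changed since this page was loaded.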
async function submit(e) {
e.preventDefault()
e.stopPropagation()
fieldset.disabled = true
try {
const newConf = gatherConf()
const savedConf = await api.WebserverConfigSave(conf, newConf)
conf = savedConf
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
} finally {
fieldset.disabled = false
}
}
),
)
}
const init = async () => {
let curhash
const page = document.getElementById('page')
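// Parse the location hash and render the matching admin page. Called at load and on every hash change.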
const hashChange = async () => {
if (curhash === window.location.hash) {
return
}
let h = decodeURIComponent(window.location.hash)
if (h !== '' && h.substring(0, 1) == '#') {
h = h.substring(1)
}
const t = h.split('/')
page.classList.add('loading')
try {
if (h == '') {
await index()
} else if (h === 'config') {
await config()
} else if (h === 'loglevels') {
await loglevels()
} else if (h === 'accounts') {
await accounts()
} else if (t[0] === 'accounts' && t.length === 2) {
await account(t[1])
} else if (t[0] === 'domains' && t.length === 2) {
await domain(t[1])
} else if (t[0] === 'domains' && t.length === 3 && t[2] === 'dmarc') {
await domainDMARC(t[1])
} else if (t[0] === 'domains' && t.length === 4 && t[2] === 'dmarc' && parseInt(t[3])) {
await domainDMARCReport(t[1], parseInt(t[3]))
} else if (t[0] === 'domains' && t.length === 3 && t[2] === 'tlsrpt') {
await domainTLSRPT(t[1])
} else if (t[0] === 'domains' && t.length === 4 && t[2] === 'tlsrpt' && parseInt(t[3])) {
await domainTLSRPTID(t[1], parseInt(t[3]))
} else if (t[0] === 'domains' && t.length === 3 && t[2] === 'dnscheck') {
await domainDNSCheck(t[1])
} else if (t[0] === 'domains' && t.length === 3 && t[2] === 'dnsrecords') {
await domainDNSRecords(t[1])
} else if (h === 'queue') {
await queueList()
} else if (h === 'tlsrpt') {
await tlsrpt()
} else if (h === 'dmarc') {
await dmarcIndex()
} else if (h === 'dmarc/reports') {
await dmarcReports()
} else if (h === 'dmarc/evaluations') {
await dmarcEvaluations()
} else if (t[0] == 'dmarc' && t[1] == 'evaluations' && t.length === 3) {
await dmarcEvaluationsDomain(t[2])
} else if (h === 'mtasts') {
await mtasts()
} else if (h === 'dnsbl') {
await dnsbl()
} else if (h === 'webserver') {
await webserver()
} else {
dom._kids(page, 'page not found')
}
} catch (err) {
console.log('error', err)
window.alert('Error: ' + err.message)
curhash = window.location.hash
return
}
curhash = window.location.hash
page.classList.remove('loading')
}
window.addEventListener('hashchange', hashChange)
hashChange()
}
window.addEventListener('load', init)
</script>
</body>
</html>