I've run DNS servers in the past - BIND and pdns. I've now gone all in ... because ... well it started with ACME.
As the OP states you can get a registrar to host a domain for you and then you create a subdomain anywhere you fancy and that includes at home. Do get the glue records right and do use dig to work out what is happening.
Now, with a domain under your own control, you can use CNAME records in other zones to point at your zones, and if your zones support dynamic DNS updates (RFC 2136) you can now support ACME, i.e. Let's Encrypt, ZeroSSL and co.
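For the curious, an RFC 2136-based DNS-01 hook boils down to one dynamic update that adds a TXT record. A minimal sketch with `nsupdate`; the server, zone, key path and token here are all placeholders, not anyone's real setup:

```shell
# Build an RFC 2136 update script that would publish an ACME DNS-01 token.
# ns1.example.com, acme.example.com and the token are placeholders.
cat > /tmp/acme-update.txt <<'EOF'
server ns1.example.com
zone acme.example.com.
update add _acme-challenge.acme.example.com. 60 TXT "dummy-token"
send
EOF

# Signed with a TSIG key, this is all an ACME DNS-01 hook has to do:
#   nsupdate -k /etc/acme.tsig /tmp/acme-update.txt
cat /tmp/acme-update.txt
```

The ACME client then asks the CA to validate, and a matching `update delete` cleans the record up afterwards.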
Sadly certbot doesn't do (or didn't, last I checked) CNAME redirects for ACME. However, acme.sh and simple-acme do, and both are absolutely rock solid, widely used and well trodden.
acme.sh is ideal for Unix gear and, if you follow this bloke's installation method: https://pieterbakker.com/acme-sh-installation-guide-2025/ , is usefully centralised.
simple-acme is for Windows. It has loads of add-on scripts to deal with common scenarios. Those scripts are nominally deprecated but work rather well. Quite a lot of magic here that an old-school Linux sysadmin is glad of.
The PowerDNS authoritative server supports dynamic DNS, and you can filter access by IP and TSIG key, per zone and/or globally.
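In PowerDNS terms that's one config switch plus per-zone metadata; roughly like this (the zone name, key name and network are illustrative):

```
# pdns.conf: turn on RFC 2136 update processing
dnsupdate=yes

# Per zone: generate a TSIG key, allow updates signed with it,
# and restrict the source addresses that may send updates.
pdnsutil generate-tsig-key acme-key hmac-sha256
pdnsutil set-meta example.com TSIG-ALLOW-DNSUPDATE acme-key
pdnsutil set-meta example.com ALLOW-DNSUPDATE-FROM 203.0.113.0/24
```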
Join the dots.
[EDIT: Speling, conjunction switch]
stego-tech
I've found that teaching DNS is an excellent gateway to learning about how the internet itself works, especially to "green" tech folks who go blank-faced when you get into protocols, IPs, etc.
Break out a piece of mail, connect the dots, and you see their eyes light up with comprehension. "Oh, so that's how my computer gets to google.com; it's just like how my postman knows where to deliver my mail!" Then a critical component is demystified, and they want to learn more.
Running a DNS server is honestly such a good activity for folks in general.
defanor
I prefer and use the Knot DNS server for authoritative DNS (and either knot-resolver or Unbound as a caching resolver) myself: it is quite feature-rich, with DNSSEC, RFC 2136 support, and an easy master/slave setup. Apparently it also supports database-backed configuration and zone definitions, but I find file-based storage simpler.
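For reference, a minimal Knot zone with DNSSEC signing and an RFC 2136 update ACL is only a few lines of knot.conf; the names and address below are illustrative:

```
acl:
  - id: acme_update
    address: 203.0.113.10
    action: update

zone:
  - domain: example.com
    file: example.com.zone
    acl: acme_update
    dnssec-signing: on
```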
Adachi91
I've been running BIND for quite a long time now and I've been very happy with it; very few issues other than my own folly. I'm not on a static IP: in the past 15 years my IP has changed 4 times (once due to a router change, 3 times due to Comcast outages), and I didn't catch the last swap for over a month.
Which brings me to a rather big gripe about other resolvers not respecting TTLs: 70% of the resolvers on https://www.whatsmydns.net/ reported they could not resolve my A records, while the other 30% were like "Yeah, here you go" from their cache.
I fixed the glue and got everything back up. I need to write an automated script that checks every day whether my IP has changed and alerts me to update the glue record at my registrar.
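The check itself is simple: compare the current WAN address against what the parent zone serves for the glue name. A sketch of such a script; the hostnames are placeholders, and the network lookups sit behind a `__main__` guard so the comparison logic stands on its own:

```python
import subprocess

def needs_glue_update(current_ip: str, glue_ips: set) -> bool:
    """True when the glue record no longer lists the current WAN IP."""
    return current_ip not in glue_ips

def lookup_glue(ns_name: str, parent_server: str) -> set:
    """Ask a parent-zone server directly for the glue A record(s)."""
    out = subprocess.run(
        ["dig", "+short", "@" + parent_server, ns_name, "A"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.split())

if __name__ == "__main__":
    # ns1.example.com and a.gtld-servers.net are placeholders for your setup.
    glue = lookup_glue("ns1.example.com", "a.gtld-servers.net")
    # OpenDNS echoes the querying address back as an A record.
    wan = subprocess.run(
        ["dig", "+short", "@resolver1.opendns.com", "myip.opendns.com", "A"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if needs_glue_update(wan, glue):
        print(f"glue is stale: registrar serves {glue}, WAN is {wan}")
```

Run from cron daily, with the `print` swapped for whatever alerting you already use.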
I use a lot of mix-and-match scripts to maintain other aspects, like DNS challenges for e.g. Let's Encrypt: I'll use their hooks to update my DNS, re-sign the zone (DNSSEC), complete the challenge, then clean up. My more personal domains don't use DNSSEC, so there I skip right ahead.
I quite enjoy handling my own DNS records. BIND has been really good to me, and I love its `view "external"` and `view "internal"` scopes, so I can give the world my authoritative records and internally serve my intranet and other services like Pi-hole (which sits behind BIND).
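That split-horizon setup is a few stanzas in named.conf; a trimmed sketch (the network, zone and file names are illustrative):

```
acl internal-nets { 192.168.0.0/16; localhost; };

view "internal" {
    match-clients { internal-nets; };
    recursion yes;
    zone "example.com" {
        type master;
        file "internal/example.com.zone";  // intranet addresses
    };
};

view "external" {
    match-clients { any; };
    recursion no;
    zone "example.com" {
        type master;
        file "external/example.com.zone";  // public addresses
    };
};
```

Views are matched in order, so internal clients never see the external zone data.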
BatteryMountain
Get a mini PC with 2x LAN ports plus a MediaTek Wi-Fi 6/7 module. Install Proxmox and make three VMs: OpenWrt (or router firmware of choice), Unbound and AdGuard Home. Plug your fibre into one LAN port and the rest of your network into the other. In Proxmox, set up PCIe passthrough for one of the LAN ports and the Wi-Fi card. Set up OpenWrt to connect to your ISP and point its DNS at your AdGuard Home server, then point AdGuard Home at your Unbound server as upstream. This is a good starting point if you want to get a feel for running your own router + DNS. You don't need to use off-the-shelf garbage routers; x86/x64 routers are the best. On OpenWrt I configure a traffic-shaping queue so that I don't get bufferbloat, so my connection is super stable and low latency. Combined with the AdGuard + Unbound DNS setup, my internet connection is amazingly fast compared to traditional routers.
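The resolver end of that chain is a tiny unbound.conf: Unbound recurses straight to the roots and only answers the AdGuard VM. A sketch, with made-up addresses:

```
server:
    interface: 10.0.0.3                  # the Unbound VM's address
    access-control: 10.0.0.0/24 allow    # only the AdGuard Home network
    # no forward-zone: full recursion straight to the root servers
    hide-identity: yes
    hide-version: yes
```

AdGuard Home then just lists `10.0.0.3:53` as its sole upstream.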
Better yet, set up SSH to the Proxmox server and ask Claude Code to set it up for you; works like a charm! Claude can call ssh and dig to verify that your DNS chains work, test your firewall and ports (basically running pen tests against yourself..), and sort out almost any issue (I had an Intel Wi-Fi card with firmware locks on broadcasting in the 5 GHz spectrum in AP mode, which MediaTek doesn't have; Claude tried to override the firmware in the kernel, but Intel's firmware won't budge). It can set up safe automatic nightly updates, help you set up recovery/backup plans (which run before updates), automate certain Proxmox tasks (periodic snapshotting of VMs) and, best of all, document the entire infrastructure comprehensively each time I make changes to it.
1vuio0pswjnm7
You can also serve a root.zone on that DNS server, and it does not have to be a carbon copy of ICANN's root.zone. I have been doing this for over 15 years. I've tried many DNS server software projects over that time, and I always come back to djbdns.
Multiple comments in this thread refer to TLS certificates
Why is payment to and/or permission from a third party "necessary" to encrypt data in transit over a computer network, whether it's a LAN or an internet. What does this phoney "requirement" achieve
For example, why is it "necessary" to purchase a domain name registration from an "ICANN-approved" registrar in order to use a TLS certificate
Is obtaining a domain name registration from an "ICANN-approved" registrar proof of identity for purposes of "authentication". What purpose does _purchasing_ a registration serve. For example, similar to "free" Let's Encrypt certificates, domain names could also be "free"
Whatever "authentication" ICANN and its "approved" registries and registrars are doing, e.g., none, is it possible someone else could do it better using a different approach
This comment is not asking for answers to these questions; the questions are rhetorical. Of course the questions may trigger defensive replies; everyone is entitled to an opinion and opinions may differ
emithq
One thing worth noting if you're using your own DNS for Let's Encrypt DNS-01 challenges: make sure your authoritative server supports the RFC 2136 dynamic update protocol, or you'll end up writing custom API shims for every ACME client. PowerDNS has solid RFC 2136 support out of the box and pairs well with Certbot's --preferred-challenges dns-01 flag. BIND works too but the ACL configuration for allowing dynamic updates from specific IPs is fiddly to get right the first time.
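Concretely, Certbot's side of this is the certbot-dns-rfc2136 plugin: a credentials file plus one flag. A sketch; the server address, key name and secret below are placeholders:

```
# /etc/letsencrypt/rfc2136.ini  (keep it chmod 600)
dns_rfc2136_server = 203.0.113.5
dns_rfc2136_name = acme-key
dns_rfc2136_secret = bWVyZWx5LWFuLWV4YW1wbGUta2V5Cg==
dns_rfc2136_algorithm = HMAC-SHA256

# Then:
#   certbot certonly --dns-rfc2136 \
#     --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
#     -d example.com
```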
dwedge
I've been tempted by this because I self-host everything else, but "adding an entry to Postgres instead of using the Namecheap GUI" is overkill; just use a DNS provider with an API.
The last few days I've been migrating everything to the luadns format, stored in GitHub, with GitHub Actions triggering a script that converts it to octodns format and applies it.
I could have just used either directly, but I like the luadns format and didn't want to be stuck with them as a provider.
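A pipeline like that can be a short workflow file. This is a guess at the shape, not the poster's actual setup; the conversion script name and octodns config path are placeholders:

```yaml
name: dns-apply
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install octodns
      # placeholder: convert luadns-format zone data into octodns YAML
      - run: ./scripts/luadns-to-octodns.sh
      # octodns dry-runs by default; --doit pushes the changes
      - run: octodns-sync --config-file octodns.yaml --doit
```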
icedchai
I've been running authoritative and caching DNS servers since the 90's. BIND is still my go-to because I am familiar with it.
kev009
unbound and nsd for me, always run my own recursor and authority.
WaitWaitWha
Running dnsmasq on an old Raspberry Pi with a USB SSD. No problems, no issues. It just quietly runs in the background.
vardalab
I have run Technitium for 4 or so years now in recursive mode; it handles all my homelab needs and it's faster as well. Now that it has clustering support, I have three instances in my Proxmox cluster.
fullstop
I've been running tinydns for decades now. I don't even think about it anymore.
rmoriz
Still running DNS without a database, fully immutable, with push-based deployment.
micw
I'd like to run my personal DNS server for privacy reasons on a cheap VPS. But how can I make it available to me only? There's no auth on DNS, right?
karel-3d
If you run powerdns auth, consider front-running it with dnsdist (also from powerdns).
(disclaimer: I contribute a tiny bit to dnsdist.)
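A minimal dnsdist.conf for fronting an authoritative PowerDNS instance might look like the following; the addresses and the rate limit are illustrative:

```lua
-- listen on the public address
setLocal("203.0.113.5:53")
-- forward queries to the auth server on a private port
newServer({address = "127.0.0.1:5300", pool = "auth"})
-- basic abuse throttling: drop clients sending more than 50 qps
addAction(MaxQPSIPRule(50), DropAction())
```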
tzury
Just remember: if you run your own DNS for a mission-critical platform, the platform is exposed to UDP DDoS attacks that are hard to detect, let alone prevent.
Unless, of course, you invest 5-6 figures (USD) in equipment, at which point you can look back and ask yourself whether you were better off with Google Cloud DNS, AWS Route 53 and the like.
bpbp-mango
I run PowerDNS as both the authoritative and the recursive server at my ISP job. Great piece of software.
deepsun
How do you set up DNSSEC with it?
justsomehnguy
> writing zone files with some arcane syntax that BIND 9 is apparently famous of
gawd just install webmin ffs