I was migrating my current server to a new one and, of course, changing distro from debian to arch.
Switching from debian to arch on a production box is highly debatable :D especially when it comes to security patches, and staying bleeding edge isn't really the normal approach to ensure a calm life.
Also arch tends to solve things in a different way, often even more plain 'linux' than debian or ubuntu. What I mean by that is: if you want to run for example gitlab on arch, you have to build it up to some degree from scratch, since arch embeds it into the default system architecture.
This also means some processes get registered with systemd and not within the 'gitlab' command system.
So the things I had to do:
- dovecot + postfix + postfixadmin + letsencrypt - for mails
- haproxy + nginx + letsencrypt for http
- nextcloud + pgsql for my 'easy cloud'
- netdata for monitoring
- seafile + seahub + sqlite for my 'data sharing'
- nftables + fail2ban for security
- openvpn for deployments to other systems
- gitlab, gitlab-runner for my ticketing, CI/CD
- mysql, php, rust, python, go, ruby, gcc, git for basic development / build purposes
- matrix as chat server to get rid of slack
In this blogpost I will just cover my
- Haproxy + nginx + letsencrypt setup
- netdata
- nftables + fail2ban
and even that only in a superficial manner: it takes a lot of time and research to build a 'relatively' safe system, and really I would have to add selinux etc. on top of it.
My basics
first I installed my defaults for every system
pacman -S htop vim strace glances
- htop is my default process management tool.
- glances is a new one I am testing at the moment
- vim is my default editor
- strace is my default 'let's see what's happening' tool
with these tools I can start building the rest.
HTTP Servers / Proxy / TLS Authority
pacman -S nginx haproxy certbot
systemctl enable nginx
systemctl start nginx
systemctl enable haproxy
systemctl start haproxy
I prefer the debian approach of using symlinks in the sites-available / sites-enabled structure.
so I enabled it in nginx by creating the two folders in /etc/nginx
mkdir /etc/nginx/sites-available
mkdir /etc/nginx/sites-enabled
by simply adding
http {
.....
include /etc/nginx/sites-enabled/*;
}
in the /etc/nginx/nginx.conf
I can now just enable and disable webpages by creating the config in sites-available and using a symbolic link to 'enable' it in sites-enabled
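enabling a site then looks like this (the site name is just an example):
ln -s /etc/nginx/sites-available/example.conf /etc/nginx/sites-enabled/example.conf
nginx -t
systemctl reload nginx
nginx -t validates the configuration before the reload actually applies it; removing the symlink disables the site again.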
next is my haproxy config
global
log /dev/log local0
log /dev/log local1 notice
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
maxconn 50000
ssl-default-bind-options no-sslv3 no-tls-tickets force-tlsv12
ssl-default-bind-ciphers EECDH+AESGCM:EDH+AESGCM
tune.ssl.default-dh-param 2048
defaults
log global
mode http
option contstats
option http-server-close
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
stats enable
stats uri /haproxy?show
stats realm Strictly\ Private
stats auth user:password
userlist basic-auth-list
group is-regular-user
group is-admin
user <myuser> password <my-passwordhash> groups is-admin
frontend http
bind *:80
default_backend example-nginx-http
mode http
option forwardfor
option http-server-close
option httpclose
acl host_gitlab hdr(host) -i gitlab.example.com
acl host_nextcloud hdr(host) -i nextcloud.example.com
acl host_netdata hdr(host) -i netdata.example.com
acl host_postfixadmin hdr(host) -i postfixadmin.example.com
redirect scheme https code 301 if { hdr(Host) -i gitlab.example.com } !{ ssl_fc }
redirect scheme https code 301 if { hdr(Host) -i netdata.example.com } !{ ssl_fc }
redirect scheme https code 301 if { hdr(Host) -i postfixadmin.example.com } !{ ssl_fc }
redirect scheme https code 301 if { hdr(Host) -i nextcloud.example.com } !{ ssl_fc }
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl
frontend https
bind *:443 ssl crt /etc/haproxy/ssl/a-cert.pem
option httplog
option forwardfor
option http-server-close
option httpclose
acl host_gitlab hdr(host) -i gitlab.example.com
acl host_netdata hdr(host) -i netdata.example.com
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl
use_backend gitlab-nginx-https if host_gitlab
use_backend netdata-https if host_netdata
default_backend example-nginx-http
backend netdata-https
acl devops-auth http_auth_group(basic-auth-list) is-admin
http-request auth realm devops unless devops-auth
server netdata 0.0.0.0:19999 maxconn 1000
backend letsencrypt-backend
server letsencrypt 0.0.0.0:8888
backend example-nginx-http
balance leastconn
option httpclose
option forwardfor
# server pool
server example-nginx 0.0.0.0:8089 check
backend example-nginx-https
balance leastconn
option httpclose
option forwardfor
option httplog
option abortonclose
server example-nginx-tls 0.0.0.0:8444 check ssl verify none
backend gitlab-nginx-https
balance leastconn
option httpclose
option forwardfor
option httplog
option abortonclose
# server pool
server nginx1 0.0.0.0:8443 check ssl verify none
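as a side note, haproxy can validate a configuration before it gets (re)loaded, which is handy after edits like these (the path is the default location, adjust it if yours differs):
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy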
I prefer haproxy to nginx as a proxy simply because
- a) I am used to it
- b) nginx has fewer options for loadbalancing
it's more a question of taste at my level of performance and system load
I added some acls for my netdata so that not everyone can look at my server stats; in theory this would also allow me to match acls with frontends on a more global level, but for me this is good enough.
userlist basic-auth-list
group is-regular-user
group is-admin
user <myuser> password <my-passwordhash> groups is-admin
backend netdata-https
acl devops-auth http_auth_group(basic-auth-list) is-admin
http-request auth realm devops unless devops-auth
server netdata 0.0.0.0:19999 maxconn 1000
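the <my-passwordhash> is a standard crypt(3)-style hash (haproxy also accepts insecure-password for plain text, which I'd rather avoid). one way to generate a sha-512 crypt hash, assuming a reasonably recent openssl (1.1.1+):
openssl passwd -6
the resulting $6$... string is what goes into the userlist as <my-passwordhash>.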
Also, since I only forward to local internal sockets, I didn't care that much about encrypting the proxy traffic from haproxy to my nginx or other systems (on a production system with multiple server endpoints it's a must!). I should also point out that just forwarding TCP for SSL, if you don't need any packet matching above OSI layer 4, actually makes more sense.
in this case I just like having my SSL certs in one place.
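for reference, a minimal sketch of what such a layer-4 passthrough could look like, routing on the SNI from the TLS handshake instead of terminating it (hostnames, ports and backend names are placeholders, not my actual config, and it would of course replace the https frontend above):
frontend https-passthrough
    bind *:443
    mode tcp
    # wait for the TLS ClientHello so the SNI is available for routing
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend gitlab-tcp if { req_ssl_sni -i gitlab.example.com }
    default_backend fallback-tcp
backend gitlab-tcp
    mode tcp
    server gitlab 127.0.0.1:8443
backend fallback-tcp
    mode tcp
    server fallback 127.0.0.1:8444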
the clause
frontend http
...
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl
...
frontend https
...
acl letsencrypt-acl path_beg /.well-known/acme-challenge/
use_backend letsencrypt-backend if letsencrypt-acl
....
backend letsencrypt-backend
server letsencrypt 0.0.0.0:8888
allows me to refresh all certs via letsencrypt
sudo certbot certonly --standalone --preferred-challenges http -d test.example.com --http-01-port=8888
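haproxy expects the certificate and its key concatenated in the single pem referenced by the crt option, so the fullchain and privkey that certbot writes have to end up in /etc/haproxy/ssl/a-cert.pem somehow. a sketch of a certbot deploy hook that could automate this (the hook name and domain are examples; certbot runs executables from this directory after a successful renewal):
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/haproxy-pem.sh (example name)
# rebuild the combined pem haproxy serves and reload haproxy afterwards
DOMAIN=test.example.com
cat "/etc/letsencrypt/live/${DOMAIN}/fullchain.pem" \
    "/etc/letsencrypt/live/${DOMAIN}/privkey.pem" \
    > /etc/haproxy/ssl/a-cert.pem
systemctl reload haproxy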
I could go into detail about security headers etc. but that would be too much I guess.
for more information please look here:
so I got my
- haproxy + nginx + letsencrypt running
as you can see I already added my netdata endpoint as well.
Installing netdata
pacman -S netdata
systemctl enable netdata
systemctl start netdata
that's actually it ;D ... now I can monitor my server in realtime (github.com/netdata/netdata)
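since netdata is only reachable through the haproxy frontend anyway (port 19999 is not opened in nftables below), it can also be told to listen on loopback only. a sketch of the relevant bit of /etc/netdata/netdata.conf (treat the option as an assumption and check it against your netdata version):
[web]
    # only listen locally, haproxy proxies external requests
    bind to = 127.0.0.1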
actually I should add something like Icinga as well for better monitoring since Icinga can be easily integrated with a push client or send emails to notify you about problems. It's on my todo list.
fail2ban
fail2ban is a basic bruteforce protection: every IP that fails a certain number of times at a registered 'jail' service will be blocked for a certain amount of time.
pacman -S fail2ban
systemctl enable fail2ban
per default on arch nothing is enabled, so we need to change the config file
vim /etc/fail2ban/jail.conf
the number 1 attack vector is ssh (we will also switch to key-only login, but more on that later)
we add the following line to the sshd section
[sshd]
.....
enabled = true
.....
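optionally the ban behaviour can be tuned as well; a minimal sketch of an /etc/fail2ban/jail.local override (the values are just examples, not my actual settings - jail.local takes precedence over jail.conf and survives package updates):
[DEFAULT]
# how long an IP stays banned
bantime = 1h
# window in which failures are counted
findtime = 10m
# failed attempts allowed before the ban kicks in
maxretry = 5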
then we start the service
systemctl start fail2ban
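to verify that the jail is actually active:
fail2ban-client status sshd
this lists the currently failed and banned IPs for the sshd jail.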
nftables
pacman -S nftables
systemctl enable nftables
systemctl start nftables
we can now list our currently active tables
nft list tables
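and the full ruleset including chains and rules with
nft list ruleset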
my configuration is as follows
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        # established/related connections
        ct state established,related accept
        # invalid connections
        ct state invalid drop
        # loopback interface
        iif lo accept
        # ICMP & IGMP
        ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, mld-listener-query, mld-listener-report, mld-listener-reduction, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept
        ip protocol icmp icmp type { destination-unreachable, router-solicitation, router-advertisement, time-exceeded, parameter-problem } accept
        ip protocol igmp accept
        # SSH (port 22)
        tcp dport ssh accept
        # HTTP (ports 80 & 443)
        tcp dport { http, https } accept
        tcp dport 25 accept
        tcp dport 465 accept
        tcp dport 143 accept
        tcp dport 8888 accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}
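after editing, the ruleset can be checked and reloaded like this (assuming the rules live in /etc/nftables.conf, which the arch service unit loads by default):
nft -c -f /etc/nftables.conf
nft -f /etc/nftables.conf
the -c run only checks the syntax without applying anything; since the file starts with flush ruleset, the second command replaces the active rules in one atomic transaction.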
the important part is that I only accept certain ICMP and IGMP packet types; everything else falls through to the drop policy
ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, mld-listener-query, mld-listener-report, mld-listener-reduction, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept
ip protocol icmp icmp type { destination-unreachable, router-solicitation, router-advertisement, time-exceeded, parameter-problem } accept
ip protocol igmp accept
so for example you cannot ping my server and portscans etc. are harder against my machine, because if you don't use the right ports and packets .... it won't even respond.
for my basic services
tcp dport ssh accept
tcp dport { http, https } accept
tcp dport 25 accept
tcp dport 465 accept
tcp dport 143 accept
tcp dport 8888 accept
I accept the following external port connections:
- ssh: 22
- http: 80, 443
- mail: 25, 465, 143
- letsencrypt: 8888
so my databases etc. are not exposed. most of my services run on different internal ports and are only reachable via haproxy.
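a quick way to cross-check what is actually listening (and on which addresses) is
ss -tlnp
everything bound to 127.0.0.1 stays internal, the rest is only reachable if nftables lets it through.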
I have to do more security hardening but as an initial layer this seems okay.