Squid Proxy Server with Clustering using Corosync, Pacemaker and PCS




Aim: 
Set up a Squid proxy server in a clustered environment using Pacemaker, Corosync and PCS. The same approach works for other clustered services such as httpd.

Requirements:
Get two servers with the same OS and similar configuration. This setup uses a CentOS 7.4 minimal install.
Squid on node1 - xx.xx.xx.80
Squid on node2 - xx.xx.xx.137
Cluster IP1 : xx.xx.xx.89
Cluster IP2 : xx.xx.xx.142

Steps 1 to 9 are to be run on all cluster servers (node1 and node2).
1) Install corosync, pacemaker and pcs
# yum install -y corosync pcs pacemaker


2) Disable SELinux
# nano /etc/sysconfig/selinux
and change the SELINUX line to disabled as below (takes effect after a reboot)
SELINUX=disabled
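If you prefer a non-interactive edit, the change can be scripted. The sketch below applies it to a temporary copy so the effect is visible; on the real nodes, point the sed at /etc/sysconfig/selinux and reboot afterwards.

```shell
# Demonstrated on a temp copy; on the nodes, target /etc/sysconfig/selinux instead
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"   # flip the mode
grep '^SELINUX=' "$cfg"                           # prints SELINUX=disabled
rm -f "$cfg"
```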


3) Add the firewall rules needed to permit all required connections.

firewall-cmd --permanent --zone=internal --change-interface=ens160 //move the NIC ens160 from the public to the internal zone
firewall-cmd --zone=internal --add-service=ssh --permanent
firewall-cmd --zone=internal --add-service=http --permanent
firewall-cmd --zone=internal --add-service=https --permanent
firewall-cmd --zone=internal --add-port=3126/tcp --permanent
firewall-cmd --zone=internal --add-port=3127/tcp --permanent
firewall-cmd --zone=internal --add-port=3128/tcp --permanent
firewall-cmd --zone=internal --add-port=5404/udp --permanent
firewall-cmd --zone=internal --add-port=5405/udp --permanent
firewall-cmd --reload //reload to apply the permanent rules
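The per-port rules above can also be generated in a loop. This sketch only prints the commands (drop the echo to actually run them on the nodes):

```shell
# Prints the firewall-cmd invocations instead of executing them
for p in 3126 3127 3128; do
  echo firewall-cmd --zone=internal --add-port=${p}/tcp --permanent
done
for p in 5404 5405; do
  echo firewall-cmd --zone=internal --add-port=${p}/udp --permanent
done
```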

Note: if you experience any issues with the firewall, you can simply disable it:

# systemctl disable firewalld //disable firewall
# systemctl stop firewalld // stop firewall service


4) Install the net-tools package. This is important for Squid HA: a CentOS 7 minimal install does not ship the netstat command, but the ocf:heartbeat:Squid resource agent uses netstat to check the Squid service on both nodes.
# yum install net-tools -y //without this, the Squid HA resource agent fails to start


5) Configure the node names in the hosts file, because Pacemaker and Corosync address the nodes by name
# vi /etc/hosts
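For example, with the node IPs from this setup (the hostnames node1 and node2 are assumptions, chosen to match the pcs commands used later):

```
xx.xx.xx.80    node1
xx.xx.xx.137   node2
```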


6) Install Squid proxy and update the config file
# yum install -y squid //install the squid proxy
# vi /etc/squid/squid.conf
Add the lines below to squid.conf. They set the Squid port and the IP on which the service listens; the IP differs on each node.
http_port 3128
http_port 10.64.30.80:3128
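After editing, Squid can check the config syntax before a restart. The guard below simply makes the sketch safe to run on a machine where Squid is not installed:

```shell
# squid -k parse exits non-zero on syntax errors in the config file
if command -v squid >/dev/null 2>&1; then
  squid -k parse -f /etc/squid/squid.conf
else
  echo "squid binary not found; run this on the proxy nodes"
fi
```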



7) Start the service for squid
# systemctl start squid //start the squid service (you must run this on both nodes)
# systemctl enable squid //start the squid service after every boot (you must run this on both nodes)
# netstat -ntlp | grep 3128


8) Configure the password for the hacluster user. This user is created during the pacemaker/corosync installation, and its password must be identical on both nodes.
# passwd hacluster
User: hacluster
Pass: xxxxxxx


9) Start the pcsd service and set it to start at boot
# systemctl start pcsd //start the pcsd service
# systemctl enable pcsd //adding as startup service


From step 10 to 17, run the commands on a single node; either node is fine.
10) Start the cluster configuration by authenticating the nodes
# pcs cluster auth node1 node2 //run on one node only; authenticates with the hacluster credentials set in step 8


11) Setup the cluster with the name squid_clu
# pcs cluster setup --name squid_clu node1 node2 //setup cluster with clustername squid_clu


12) Starting the cluster service
# pcs cluster start --all //starting cluster on all servers
# pcs cluster enable --all //adding as startup service


13) The commands below are useful for monitoring and troubleshooting
# pcs status cluster
# pcs status nodes
# corosync-cmapctl | grep members
# pcs status corosync


14) Disable quorum enforcement and STONITH. In a two-node cluster, quorum can never be maintained after one node fails, so the default quorum policy would stop all resources; node fencing is likewise undesirable here. In a cluster with three or more nodes, leave both enabled.
# pcs property set stonith-enabled=false //disable stonith
# pcs property set no-quorum-policy=ignore //disable the quorum


15) Create the two virtual IP resources, xx.xx.xx.89 and xx.xx.xx.142
# pcs resource create virtual_ip1 ocf:heartbeat:IPaddr2 ip=xx.xx.xx.89 cidr_netmask=24 op monitor interval=1s meta target-role="Started"
# pcs resource create virtual_ip2 ocf:heartbeat:IPaddr2 ip=xx.xx.xx.142 cidr_netmask=24 op monitor interval=1s meta target-role="Started"


16) Check the virtual IP status with the command below
# pcs status | grep virtual_ip


17) Restart squid
# systemctl restart squid


Steps 18 to 22 are run on one node.
18) Add the virtual Squid service using the ocf:heartbeat:Squid resource agent. The resource is named squidproxy in the command below.
# pcs resource create squidproxy ocf:heartbeat:Squid squid_exe="/usr/sbin/squid" squid_conf="/etc/squid/squid.conf" squid_pidfile="/var/run/squid.pid" squid_port="3128" squid_stop_timeout="30" op start interval="0" timeout="60s" op stop interval="0" timeout="120s" op monitor interval="1s" timeout="30s" meta target-role="Started"


19) Bind/group the virtual IPs and the Squid resource together
# pcs resource group add ProxyAndIP virtual_ip1 virtual_ip2 squidproxy //the resource name must match the one created in step 18
# pcs resource meta ProxyAndIP target-role="Started"


20) Configure the start order: virtual_ip1 first, then virtual_ip2, then the Squid resource. A pcs order constraint takes two resources at a time:
# pcs constraint order virtual_ip1 then virtual_ip2
# pcs constraint order virtual_ip2 then squidproxy


21) Restart all cluster services
# pcs cluster stop --all && pcs cluster start --all
# crm_mon //monitoring the cluster

22) If everything is working, you will see output like the below from
# crm_mon

Stack: corosync
Current DC: node1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Mon Sep 3 10:46:45 2018
Last change: Mon Sep 3 08:51:23 2018 by root via cibadmin on node1

2 nodes configured
3 resources configured

Online: [ node1 node2 ]

Active resources:

Resource Group: ProxyAndIP
virtual_ip1 (ocf::heartbeat:IPaddr2): Started node1
virtual_ip2 (ocf::heartbeat:IPaddr2): Started node1
squid (ocf::heartbeat:Squid): Started node1


23) If you run systemctl status squid on the nodes, the systemd unit shows as failed. This is expected: Pacemaker starts and supervises the Squid process itself through the ocf:heartbeat:Squid resource agent, so the parent process and its kid are running outside systemd's control, as below:

[root@node1 squid]# systemctl status squid
● squid.service - Squid caching proxy
Loaded: loaded (/usr/lib/systemd/system/squid.service; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Sun 2018-09-02 16:15:02 +04; 18h ago
Process: 14975 ExecStop=/usr/sbin/squid -k shutdown -f $SQUID_CONF (code=exited, status=0/SUCCESS)
Process: 8816 ExecStart=/usr/sbin/squid $SQUID_OPTS -f $SQUID_CONF (code=exited, status=0/SUCCESS)
Process: 8811 ExecStartPre=/usr/libexec/squid/cache_swap.sh (code=exited, status=0/SUCCESS)
Main PID: 8821 (code=killed, signal=KILL)
Sep 02 16:15:02 node1.example.com systemd[1]: Unit squid.service entered failed state.
Sep 02 16:15:02 node1.example.com systemd[1]: squid.service failed.


Additional commands and notes:
1. To destroy the cluster (for example, to change a virtual IP):
# pcs cluster destroy
# systemctl restart pcsd
- Then repeat the procedure from step 9


2. Configure proxy access on the clients, using either cluster IP:
# export http_proxy=http://xx.xx.xx.89:3128
# export https_proxy=http://xx.xx.xx.89:3128
or
# export http_proxy=http://xx.xx.xx.142:3128
# export https_proxy=http://xx.xx.xx.142:3128
- Test with wget http://google.com
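To confirm the variables are in place on a client before testing (the values below reuse the first cluster IP from above):

```shell
# Set the proxy variables and list them to verify
export http_proxy=http://xx.xx.xx.89:3128
export https_proxy=http://xx.xx.xx.89:3128
env | grep _proxy
```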


3. Redirect Squid requests to an external (parent) proxy.
- Add below in /etc/squid/squid.conf
  cache_peer proxy.example.com parent 80 0 no-query default
  never_direct allow all
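A common refinement, sketched here with an assumed acl name and internal domain: let internal destinations bypass the parent proxy while everything else is forced through it.

```
# Hypothetical: traffic to .corp.example bypasses the parent proxy
acl internal_sites dstdomain .corp.example
always_direct allow internal_sites
never_direct allow all
```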



4. Hide the headers and cache details of the client servers from the external world. (On Squid 3.x, as shipped with CentOS 7, the old header_access directive is split into request_header_access and reply_header_access.)
- Add below in /etc/squid/squid.conf
via off
forwarded_for delete
request_header_access From deny all
request_header_access Proxy-Connection deny all
request_header_access Via deny all
request_header_access Forwarded-For deny all
request_header_access X-Forwarded-For deny all
request_header_access Pragma deny all
request_header_access Keep-Alive deny all
request_header_access Cache-Control deny all
reply_header_access Server deny all
reply_header_access WWW-Authenticate deny all
reply_header_access Link deny all
reply_header_access X-Cache deny all
reply_header_access X-Cache-Lookup deny all
reply_header_access Via deny all


5. Using multiple IPs for Proxy server
- Add below in /etc/squid/squid.conf
balance_on_multiple_ip on

6. Assign access controls so that a given range of client IPs uses a particular outgoing proxy IP.
    For example, the IP range 10.64.126.0/24 is redirected to use only proxy IP xx.xx.xx.142.
    The ranges are grouped into network1, network2 and network3.
- Add below in /etc/squid/squid.conf
acl network1 src 10.64.126.0/24
acl network2 src 10.64.62.0/24
acl network3 src 10.64.77.0/24
tcp_outgoing_address xx.xx.xx.89 network3
tcp_outgoing_address xx.xx.xx.142 network2
tcp_outgoing_address xx.xx.xx.142 network1

7. Log internet access from the different client IPs, including which outgoing proxy IP each request used and which website was accessed, with a human-readable timestamp. The log file will be /var/log/squid/access.log.
- Add below in /etc/squid/squid.conf
logformat squid %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Sh/%<a %mt "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
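Once access.log is populated, a quick per-client summary can be pulled with standard tools. The sketch below runs against synthetic sample lines, not real Squid output, so the field layout is an assumption:

```shell
# Count requests per client IP (field 1 in this sample layout)
log=$(mktemp)
printf '10.64.126.5 - - [03/Sep/2018:10:00:01] "GET http://example.com/ HTTP/1.1" 200 1024\n' >> "$log"
printf '10.64.126.5 - - [03/Sep/2018:10:00:02] "GET http://example.org/ HTTP/1.1" 200 2048\n' >> "$log"
printf '10.64.62.9 - - [03/Sep/2018:10:00:03] "GET http://example.net/ HTTP/1.1" 304 0\n' >> "$log"
awk '{print $1}' "$log" | sort | uniq -c | sort -rn   # 2 requests from 10.64.126.5, 1 from 10.64.62.9
rm -f "$log"
```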


 8. By default with the setup above, outgoing traffic appears to come only from whichever cluster IP is currently active. To control which proxy IP is presented to the external world, add the parameters below to the Squid config file.
- Add below in /etc/squid/squid.conf
acl ip1 myip xx.xx.xx.89
tcp_outgoing_address xx.xx.xx.89 ip1
acl ip2 myip xx.xx.xx.142
tcp_outgoing_address xx.xx.xx.142 ip2

Troubleshooting:
- Check the firewall configuration
- Make sure SELinux is properly configured
- Make sure net-tools is installed
- Make sure Squid is installed on both nodes and squid.conf is identical
- Make sure Squid is listening on the right port
- Make sure Squid writes its PID file to the configured location
