Chapter 2. Networking

In this chapter, we will cover the following recipes:

  • Connecting to a network with a static IP
  • Installing the DHCP server
  • Installing the DNS server
  • Hiding behind the proxy with squid
  • Being on time with NTP
  • Discussing load balancing with HAProxy
  • Tuning the TCP stack
  • Troubleshooting network connectivity
  • Securing remote access with OpenVPN
  • Securing a network with uncomplicated firewall
  • Securing against brute force attacks
  • Discussing Ubuntu security best practices

Introduction

When we are talking about server systems, networking is the first and most important factor. If you are using an Ubuntu server in a cloud or virtual machine, you generally don't notice the network settings, as they are already configured with various network protocols. However, as your infrastructure grows, managing and securing the network becomes the priority.

Networking can be thought of as an umbrella term for various activities that include network configurations, file sharing and network time management, firewall settings and network proxies, and many others. In this chapter, we will take a closer look at the various networking services that help us set up and effectively manage our networks, be it in the cloud or a local network in your office.

Connecting to a network with a static IP

When you install Ubuntu server, its network setting defaults to dynamic IP addressing, that is, the network management daemon in Ubuntu searches for a DHCP server on the connected network and configures the network with the IP address assigned by DHCP. Even when you start an instance in the cloud, the network is configured with dynamic addressing using the DHCP server set up by the cloud service provider. In this recipe, you will learn how to configure the network interface with a static IP assignment.

Getting ready

You will need an Ubuntu server with access to the root account or an account with sudo privileges. If network configuration is new to you, it is recommended that you try this recipe on a local or virtual machine first.

How to do it…

Follow these steps to connect to the network with a static IP:

  1. Get a list of available Ethernet interfaces using the following command:
    $ ifconfig -a | grep eth
    
  2. Open /etc/network/interfaces and find the following lines:
    auto eth0
    iface eth0 inet dhcp
    
  3. Change the preceding lines to add an IP address, netmask, default gateway, and DNS servers (replace the sample values with those for your network):
    auto eth0
    iface eth0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1 
        dns-nameservers 192.168.1.45 192.168.1.46
    
  4. Restart the network service for the changes to take effect:
    $ sudo /etc/init.d/networking restart
    
  5. Try to ping a remote host to test the network connection:
    $ ping www.google.com
    

How it works…

In this recipe, we have modified the network configuration from dynamic IP assignment to static assignment.

First, we got a list of all the available network interfaces with ifconfig -a. The -a option of ifconfig returns all the available network interfaces, even if they are disabled. With the help of the pipe (|) symbol, we have directed the output of ifconfig to the grep command. For now, we are interested in Ethernet interfaces only. The grep command filters the received data and returns only the lines that contain the eth character sequence:

  ubuntu@ubuntu:~$ ifconfig -a | grep eth
  eth0      Link encap:Ethernet  HWaddr 08:00:27:bb:a6:03

Here, eth0 is the first Ethernet interface available on the server. After getting the name of the interface to configure, we change the network settings for eth0 in the interfaces file at /etc/network/interfaces. By default, eth0 is configured to query the DHCP server for an IP assignment. The first line, auto eth0, brings up the eth0 interface automatically at server startup. Without this line, you would need to enable the network interface manually after each reboot. You can enable the eth0 interface with the following command:

  $ sudo ifup eth0

Similarly, to disable a network interface, use the following command:

  $ sudo ifdown eth0

The second line, iface eth0 inet static, sets the configuration method for eth0 to static assignment. After this line, we add the network settings, such as the IP address, netmask, default gateway, and DNS servers.
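Putting the pieces together, the complete /etc/network/interfaces file for this recipe might look like the following sketch; the loopback stanza is part of the default Ubuntu configuration, and the addresses are the sample values used above:

# The loopback network interface (Ubuntu default)
auto lo
iface lo inet loopback

# The primary network interface, statically configured
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.45 192.168.1.46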

After saving the changes, we need to restart the networking service for the changes to take effect. Alternatively, you can simply take the network interface down and bring it back up with the ifdown and ifup commands, as shown below.
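For example, to apply the new settings by cycling just the eth0 interface rather than restarting the whole service (note that running this over an SSH session on eth0 will drop your connection):

  $ sudo ifdown eth0 && sudo ifup eth0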

There's more…

The steps in this recipe configure the network settings permanently. If you only need to change your network parameters temporarily, you can use the ifconfig and route commands as follows:

  1. Change the IP address and netmask, as follows:
    $ sudo ifconfig eth0 192.168.1.100 netmask 255.255.255.0
    
  2. Set the default gateway:
    $ sudo route add default gw 192.168.1.1 eth0
    
  3. Edit /etc/resolv.conf to add temporary name servers (DNS):
    nameserver 192.168.1.45
    nameserver 192.168.1.46
    
  4. To verify the changes, use the following commands:
    $ ifconfig eth0
    $ route -n
    
  5. When you no longer need this configuration, you can easily reset it with the following command:
    $ ip addr flush eth0
    
  6. Alternatively, you can reboot your server to reset the temporary configuration.

IPv6 configuration

You may need to configure your Ubuntu server with an IPv6 address. IPv6 addresses use a 128-bit address space and are written in hexadecimal notation, unlike IPv4 addresses, which use a 32-bit address space. Ubuntu supports IPv6 addressing, and it can be configured with either DHCP or a static address. The following is an example of a static configuration for IPv6:

iface eth0 inet6 static
    address 2001:db8::xxxx:yyyy
    gateway your_ipv6_gateway
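A slightly fuller static IPv6 stanza, assuming a /64 prefix and using Google's public IPv6 resolver purely as an example, might look like this:

auto eth0
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
    gateway 2001:db8::1
    dns-nameservers 2001:4860:4860::8888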

See also

You can find more details about network configuration in the Ubuntu Server Guide.

Installing the DHCP server

DHCP is a service used to automatically assign network configuration to client systems. DHCP can be a handy tool when you have a large pool of systems that need to be configured with network settings. When you need to change the network configuration, say to update a DNS server, all you need to do is update the DHCP server, and all the connected hosts will be reconfigured with the new settings. You also get reliable IP address configuration that minimizes configuration errors and address conflicts, and you can easily add a new host to the network without spending time on network planning.

DHCP is most commonly used to provide IP configuration settings, such as the IP address, netmask, default gateway, and DNS servers. However, it can also be used to configure the time server and hostname on the client.
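For instance, the time server and a hostname can be pushed from dhcpd.conf using standard options; a minimal sketch, where the NTP server address and the hostname are placeholders, not values from this recipe:

option ntp-servers 192.168.1.5;
option host-name "client1";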

DHCP can be configured to use the following configuration methods:

  • Manual allocation: Here, the configuration settings are tied to the MAC address of the client's network card. The same settings are supplied each time the client makes a request with the same network card.
  • Dynamic allocation: This method specifies a range of IP addresses to be assigned to clients. The server dynamically assigns IP configuration to clients on a first come, first served basis. These settings are allocated for a specified time period called a lease; after this period, the client needs to renegotiate with the server to keep using the same address. If the client leaves the network for a specified time, the lease expires and the address returns to the pool, where it can be assigned to other clients. The lease time is a configurable option and can be set to infinite.

Ubuntu comes pre-installed with the DHCP client, dhclient. The DHCP server daemon, dhcpd, can be installed while setting up an Ubuntu server or separately with the apt-get command.

Getting ready

Make sure that your DHCP host is configured with a static IP address.

You will need access to the root account or an account with sudo privileges.

How to do it…

Follow these steps to install a DHCP server:

  1. Install a DHCP server:
    $ sudo apt-get install isc-dhcp-server
    
  2. Open the DHCP configuration file:
    $ sudo nano -w /etc/dhcp/dhcpd.conf
    
  3. Change the default and max lease time if necessary:
    default-lease-time 600;
    max-lease-time 7200;
    
  4. Add the following lines at the end of the file (replace the IP addresses to match your network):
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.150 192.168.1.200;
      option routers 192.168.1.1;
      option domain-name-servers 192.168.1.2, 192.168.1.3;
      option domain-name "example.com";
    }
    
  5. Save the configuration file and exit with Ctrl + O and Ctrl + X.
  6. After changing the configuration file, restart dhcpd:
    $ sudo service isc-dhcp-server restart
    

How it works…

Here, we have installed the DHCP server with the isc-dhcp-server package. It is open source software that implements the DHCP protocol. ISC-DHCP supports both IPv4 and IPv6.

After the installation, we need to set the basic configuration to match our network settings. All dhcpd settings are listed in the /etc/dhcp/dhcpd.conf configuration file. In the sample settings listed earlier, we have configured a new network, 192.168.1.0/24. This results in IP addresses ranging from 192.168.1.150 to 192.168.1.200 being assigned to clients. The default lease time is set to 600 seconds, with an upper bound of 7200 seconds; a client can request a lease for a specific time up to this maximum. Additionally, the DHCP server provides a default gateway (the routers option) as well as default DNS servers.
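Active leases handed out by the server are recorded in a leases database; the path below is the usual default for isc-dhcp-server on Ubuntu (verify the location on your system):

  $ sudo cat /var/lib/dhcp/dhcpd.leases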

If you have multiple network interfaces, you may need to change the interfaces that dhcpd listens on. These settings are listed in /etc/default/isc-dhcp-server. You can set multiple interfaces to listen on; just specify the interface names, separated by a space, for example, INTERFACES="wlan0 eth0".
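For instance, to have dhcpd serve requests only on eth0, /etc/default/isc-dhcp-server would contain the following line:

INTERFACES="eth0"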

There's more…

You can reserve an IP address to be assigned to a specific device on the network. A reservation ensures that the specified device is always assigned the same IP address. To create a reservation, add the following lines to dhcpd.conf. This will assign the IP address 192.168.1.201 to the client with the MAC address 08:D2:1F:50:F0:6F:

host Server1 {
  hardware ethernet 08:D2:1F:50:F0:6F;
  fixed-address 192.168.1.201;
}

Installing the DNS server

DNS, the Domain Name System (often referred to simply as a name server), is a service on the Internet that provides mapping between IP addresses and domain names, and vice versa. DNS maintains a database of names and related IP addresses. When an application queries a domain name, DNS responds with the mapped IP address. Applications can also ask for a domain name by providing an IP address.

DNS is quite a big topic, and an entire chapter could be written on DNS setup alone. This recipe assumes some basic understanding of how the DNS protocol works. We will cover the installation of the BIND DNS server, its configuration as a caching DNS server, and the setup of a Primary Master and a Secondary Master. We will also cover some best practices to secure your DNS server.

Getting ready

In this recipe, I will be using four servers. You can use virtual machines if you simply want to test the setup:

  1. ns1: Name server one/Primary Master
  2. ns2: Name server two/Secondary Master
  3. host1: Host system one
  4. host2: Host system two (optional)

All servers should be configured on the same private network; I have used the 10.0.2.0/24 network. You will need root privileges on all the servers.

How to do it…

Install BIND and set up a caching name server through the following steps:

  1. On ns1, install BIND and dnsutils with the following command:
    $ sudo apt-get update
    $ sudo apt-get install bind9 dnsutils
    
  2. Open /etc/bind/named.conf.options, enable the forwarders section, and add your preferred DNS servers:
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
    
  3. Now restart BIND to apply a new configuration:
    $ sudo service bind9 restart
    
  4. Check whether the BIND server is up and running:
    $ dig -x 127.0.0.1
    
  5. You should see output similar to the following:
    ;; Query time: 1 msec
    ;; SERVER: 10.0.2.53#53(10.0.2.53)
    
  6. Use dig to query an external domain and note the query time (see the example after this list).
  7. Dig the same domain again and cross-check the query time; it should be lower than for the first query, as the response is now served from the cache.
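The screenshots for steps 6 and 7 are not reproduced here. A minimal sketch of the check, using www.ubuntu.com purely as an example domain:

  $ dig www.ubuntu.com | grep "Query time"
  $ dig www.ubuntu.com | grep "Query time"

The second run should report a noticeably smaller query time, because the answer is now served from the local cache.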

Set up Primary Master through the following steps:

  1. On the ns1 server, edit /etc/bind/named.conf.options and add the acl block above the options block:
    acl "local" {
        
    10.0.2.0/24;  # local network
    };
    
  2. Add the following lines under the options block:
    recursion yes;
    allow-recursion { local; };
    listen-on { 10.0.2.53; };  # ns1 IP address
    allow-transfer { none; };
    
  3. Open the /etc/bind/named.conf.local file to add forward and reverse zones:
    $ sudo nano /etc/bind/named.conf.local
    
  4. Add the forward zone:
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
    };
    
  5. Add the reverse zone:
    zone "2.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10";
    };
    
  6. Create the zones directory under /etc/bind/:
    $ sudo mkdir /etc/bind/zones
    
  7. Create the forward zone file using the existing zone file, db.local, as a template:
    $ cd /etc/bind/
    $ sudo cp db.local  zones/db.example.com
    
  8. Review the default contents of the new zone file; the template contains an SOA record and sample entries for localhost.
  9. Edit the SOA entry and replace localhost with the FQDN of your server.
  10. Increment the serial number (you can use the current date and time as the serial number, for example, 201507071100).
  11. Remove the entries for localhost, 127.0.0.1, and ::1.
  12. Add new records:
    ; name server - NS records
    @  IN  NS  ns.example.com.
    ; name server - A records
    ns  IN  A  10.0.2.53
    ; local - A records
    host1  IN  A  10.0.2.58
    
  13. Save the changes and exit the nano editor (a sketch of the finished zone file appears in the How it works… section below).
  14. Now create the reverse zone file using /etc/bind/db.127 as a template:
    $ sudo cp db.127 zones/db.10
    
  15. Review the default contents of db.10; like db.local, the template contains an SOA record and a PTR record for localhost.
  16. Change the SOA record and increment the serial number.
  17. Remove NS and PTR records for localhost.
  18. Add NS, PTR, and host records:
    ; NS records
    @  IN  NS  ns.example.com.
    ; PTR records
    53  IN  PTR  ns.example.com.
    ; host records
    58  IN  PTR  host1.example.com.
    
  19. Save the changes.
  20. Check the configuration files for syntax errors; the command should produce no output if everything is correct:
    $ sudo named-checkconf
    
  21. Check zone files for syntax errors:
    $ sudo named-checkzone example.com /etc/bind/zones/db.example.com
    
  22. If there are no errors, you should see an output similar to the following:
    zone example.com/IN: loaded serial 3
    OK
    
  23. Check the reverse zone file, zones/db.10, against the reverse zone name:
    $ sudo named-checkzone 2.0.10.in-addr.arpa /etc/bind/zones/db.10
    
  24. If there are no errors, you should see output similar to the following:
    zone 2.0.10.in-addr.arpa/IN: loaded serial 3
    OK
    
  25. Now restart the DNS server bind:
    $ sudo service bind9 restart
    
  26. Log in to host2 and configure it to use ns.example.com as its DNS server by adding the ns1 address (10.0.2.53) as a nameserver entry in /etc/resolv.conf on host2.
  27. Test forward lookup with the nslookup command:
    $ nslookup host1.example.com 
    
  28. You should see output similar to the following:
    $ nslookup host1.example.com
    Server: 10.0.2.53
    Address: 10.0.2.53#53
    Name: host1.example.com
    Address: 10.0.2.58
    
  29. Now test the reverse lookup:
    $ nslookup 10.0.2.58
    
  30. It should output something similar to the following:
    $ nslookup 10.0.2.58
    Server: 10.0.2.53
    Address: 10.0.2.53#53
    58.2.0.10.in-addr.arpa	name = host1.example.com
    

Set up Secondary Master through the following steps:

  1. First, allow zone transfer on Primary Master by setting the allow-transfer option in /etc/bind/named.conf.local:
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
        allow-transfer { 10.0.2.54; };
    };
    zone "2.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10";
        allow-transfer { 10.0.2.54; };
    };
    

    Note

    A syntax check will throw errors if you miss semicolons.

  2. Restart BIND9 on Primary Master:
    $ sudo service bind9 restart
    
  3. On Secondary Master (ns2), install the BIND package.
  4. Edit /etc/bind/named.conf.local to add zone declarations as follows:
    zone "example.com" {
        type slave;
        file "db.example.com";
        masters { 10.0.2.53; };
    };
    zone "2.0.10.in-addr.arpa" {
        type slave;
        file "db.10";
        masters { 10.0.2.53; };
    };
    
  5. Save the changes made to named.conf.local.
  6. Restart the BIND server on Secondary Master:
    $ sudo service bind9 restart
    
  7. This will initiate the transfer of all zones configured on Primary Master. You can check the logs on Secondary Master at /var/log/syslog to verify the zone transfer.

Tip

A zone is transferred only if the serial number under the SOA section on Primary Master is greater than that of Secondary Master. Make sure that you increment the serial number after every change to the zone file.
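To confirm that the Secondary Master is allowed to pull the zone, you can request a transfer manually from ns2 (10.0.2.54), the host permitted by allow-transfer; a minimal check:

  $ dig @10.0.2.53 example.com AXFR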

How it works…

In the first section, we have installed the BIND server and enabled a simple caching DNS server. A caching server helps to reduce bandwidth and latency in name resolution. The server will try to resolve queries locally from the cache. If the entry is not available in the cache, the query will be forwarded to external DNS servers and the result will be cached.

In the second and third sections, we set up the Primary Master and the Secondary Master, respectively. The Primary Master is the first DNS server. The Secondary Master will be used as an alternate server in case the Primary server becomes unavailable.

Under the Primary Master, we have declared a forward zone and a reverse zone for the example.com domain. The forward zone is declared with the domain name as its identifier and contains the type and the filename of the zone database file. On the Primary Master, we have set the type to master. The reverse zone is declared with similar attributes and uses part of the IP address as its identifier. As we are using a 24-bit network address (10.0.2.0/24), we include the first three octets of the IP address in reverse order (2.0.10) in the reverse zone name, 2.0.10.in-addr.arpa.

Lastly, we created the zone files, using the existing template files as a starting point. Zone files are the actual database containing the records that map FQDNs to IP addresses and vice versa. A zone file contains an SOA record, A records, and NS records. The SOA record defines the domain for the zone; A and AAAA records map hostnames to IPv4 and IPv6 addresses, respectively. A sketch of the finished forward zone file is shown below.
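This is a minimal sketch of how zones/db.example.com might look after the edits above; the SOA contact (admin.example.com.) and the timer values (taken from the Ubuntu db.local template) are assumptions and may differ on your system:

$TTL    604800
@       IN      SOA     ns.example.com. admin.example.com. (
                  201507071100          ; Serial
                        604800          ; Refresh
                         86400          ; Retry
                       2419200          ; Expire
                        604800 )        ; Negative Cache TTL
;
; name server - NS records
@       IN      NS      ns.example.com.
; name server - A records
ns      IN      A       10.0.2.53
; local - A records
host1   IN      A       10.0.2.58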

When the DNS server receives a query for the example.com domain, it checks the zone files for that domain. After finding the zone file, the host part of the query is used to find the actual IP address to be returned as the result of the query. Similarly, when a query with an IP address is received, the DNS server looks for a reverse zone file matching the queried IP address.

See also

Getting ready

In this recipe, I will be using four servers. You can create virtual machines if you want to simply test the setup:

  1. ns1: Name server one/Primary Master
  2. ns2: Name server two/Secondary Master
  3. host1: Host system one
  4. host2: Host system two, optional
    • All servers should be configured in a private network. I have used the 10.0.2.0/24 network
    • We need root privileges on all servers

How to do it…

Install BIND and set up a caching name server through the following steps:

  1. On ns1, install BIND and dnsutils with the following command:
    $ sudo apt-get update
    $ sudo apt-get install bind9 dnsutils
    
  2. Open /etc/bind/named.conf.optoins, enable the forwarders section, and add your preferred DNS servers:
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
    
  3. Now restart BIND to apply a new configuration:
    $ sudo service bind9 restart
    
  4. Check whether the BIND server is up and running:
    $ dig -x 127.0.0.1
    
  5. You should get an output similar to the following code:
    ;; Query time: 1 msec
    ;; SERVER: 10.0.2.53#53(10.0.2.53)
    
  6. Use dig to external domain and check the query time:
    How to do it…
  7. Dig the same domain again and cross check the query time. It should be less than the first query:
    How to do it…

Set up Primary Master through the following steps:

  1. On the ns1 server, edit /etc/bind/named.conf.options and add the acl block above the options block:
    acl "local" {
        
    10.0.2.0/24;  # local network
    };
    
  2. Add the following lines under the options block:
    recursion yes;
    allow-recursion { local; };
    listen-on { 10.0.2.53; };  # ns1 IP address
    allow-transfer { none; };
    
  3. Open the /etc/bind/named.conf.local file to add forward and reverse zones:
    $ sudo nano /etc/bind/named.conf.local
    
  4. Add the forward zone:
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
    };
    
  5. Add the reverse zone:
    zone "2.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10";
    };
    
  6. Create the zones directory under /etc/bind/:
    $ sudo mkdir /etc/bind/zones
    
  7. Create the forward zone file using the existing zone file, db.local, as a template:
    $ cd /etc/bind/
    $ sudo cp db.local  zones/db.example.com
    
  8. The default file should look similar to the following image:
    How to do it…
  9. Edit the SOA entry and replace localhost with FQDN of your server.
  10. Increment the serial number (you can use the current date time as the serial number, 201507071100)
  11. Remove entries for localhost, 127.0.0.1 and ::1.
  12. Add new records:
    ; name server - NS records
    @  IN  NS  ns.exmple.com
    ; name server A records 
    ns  IN  A 10.0.2.53
    ; local - A records
    host1  IN A  10.0.2.58
    
  13. Save the changes and exit the nano editor. The final file should look similar to the following image:
    How to do it…
  14. Now create the reverse zone file using /etc/bind/db.127 as a template:
    $ sudo cp db.127 zones/db.10
    
  15. The default file should look similar to the following screenshot:
    How to do it…
  16. Change the SOA record and increment the serial number.
  17. Remove NS and PTR records for localhost.
  18. Add NS, PTR, and host records:
    ; NS records
    @  IN  NS  ns.example.com
    ; PTR records
    53  IN  PTR  ns.example.com
    ; host records
    58  IN  PTR  host1.example.com
    
  19. Save the changes. The final file should look similar to the following image:
    How to do it…
  20. Check the configuration files for syntax errors. It should end with no output:
    $ sudo named-checkconf
    
  21. Check zone files for syntax errors:
    $ sudo named-checkzone example.com /etc/bind/zones/db.example.com
    
  22. If there are no errors, you should see an output similar to the following:
    zone example.com/IN: loaded serial 3
    OK
    
  23. Check the reverse zone file, zones/db.10:
    $ sudo named-checkzone example.com /etc/bind/zones/db.10
    
  24. If there are no errors, you should see output similar to the following:
    zone example.com/IN: loaded serial 3
    OK
    
  25. Now restart the DNS server bind:
    $ sudo service bind9 restart
    
  26. Log in to host2 and configure it to use ns.example.com as a DNS server. Add ns.example.com to /etc/resolve.conf on host2.
  27. Test forward lookup with the nslookup command:
    $ nslookup host1.example.com 
    
  28. You should see an output similar to following:
    $ nslookup host1.example.com
    Server: 10.0.2.53
    Address: 10.0.2.53#53
    Name: host1.example.com
    Address: 10.0.2.58
    
  29. Now test the reverse lookup:
    $ nslookup 10.0.2.58
    
  30. It should output something similar to the following:
    $ nslookup 10.0.2.58
    Server: 10.0.2.53
    Address: 10.0.2.53#53
    58.2.0.10.in-addr.arpa	name = host1.example.com
    

Set up Secondary Master through the following steps:

  1. First, allow zone transfer on Primary Master by setting the allow-transfer option in /etc/bind/named.conf.local:
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
        allow-transfer { 10.0.2.54; };
    };
    zone "2.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10";
        allow-transfer { 10.0.2.54; };
    };
    

    Note

    A syntax check will throw errors if you miss semicolons.

  2. Restart BIND9 on Primary Master:
    $ sudo service bind9 restart
    
  3. On Secondary Master (ns2), install the BIND package.
  4. Edit /etc/bind/named.conf.local to add zone declarations as follows:
    zone "example.com" {
        type slave;
        file "db.example.com";
        masters { 10.0.2.53; };
    };
    zone "2.0.10.in-addr.arpa" {
        type slave;
        file "db.10";
        masters { 10.0.2.53; };
    };
    
  5. Save the changes made to named.conf.local.
  6. Restart the BIND server on Secondary Master:
    $ sudo service bind9 restart
    
  7. This will initiate the transfer of all zones configured on Primary Master. You can check the logs on Secondary Master at /var/log/syslog to verify the zone transfer.

Tip

A zone is transferred only if the serial number under the SOA section on Primary Master is greater than that of Secondary Master. Make sure that you increment the serial number after every change to the zone file.

How it works…

In the first section, we have installed the BIND server and enabled a simple caching DNS server. A caching server helps to reduce bandwidth and latency in name resolution. The server will try to resolve queries locally from the cache. If the entry is not available in the cache, the query will be forwarded to external DNS servers and the result will be cached.

In the second and third sections, we have set Primary Master and Secondary Master respectively. Primary Master is the first DNS server. Secondary Master will be used as an alternate server in case the Primary server becomes unavailable.

Under Primary Master, we have declared a forward zone and reverse zone for the example.com domain. The forward zone is declared with domain name as the identifier and contains the type and filename for the database file. On Primary Master, we have set type to master. The reverse zone is declared with similar attributes and uses part of an IP address as an identifier. As we are using a 24-bit network address (10.0.2.0/24), we have included the first three octets of the IP address in reverse order (2.0.10) for the reverse zone name.

Lastly, we have created zone files by using existing files as templates. Zone files are the actual database that contains records of the IP address mapped to FQDN and vice versa. It contains SOA record, A records, and NS records. An SOA record defines the domain for this zone; A records and AAAA records are used to map the hostname to the IP address.

When the DNS server receives a query for the example.com domain, it checks for zone files for that domain. After finding the zone file, the host part from the query will be used to find the actual IP address to be returned as a result for query. Similarly, when a query with an IP address is received, the DNS server will look for a reverse zone file matching with the queried IP address.

See also

How to do it…

Install BIND and set up a caching name server through the following steps:

  1. On ns1, install BIND and dnsutils with the following command:
    $ sudo apt-get update
    $ sudo apt-get install bind9 dnsutils
    
  2. Open /etc/bind/named.conf.optoins, enable the forwarders section, and add your preferred DNS servers:
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
    
  3. Now restart BIND to apply a new configuration:
    $ sudo service bind9 restart
    
  4. Check whether the BIND server is up and running:
    $ dig -x 127.0.0.1
    
  5. You should get an output similar to the following code:
    ;; Query time: 1 msec
    ;; SERVER: 10.0.2.53#53(10.0.2.53)
    
  6. Use dig to external domain and check the query time:
    How to do it…
  7. Dig the same domain again and cross check the query time. It should be less than the first query:
    How to do it…

Set up Primary Master through the following steps:

  1. On the ns1 server, edit /etc/bind/named.conf.options and add the acl block above the options block:
    acl "local" {
        
    10.0.2.0/24;  # local network
    };
    
  2. Add the following lines under the options block:
    recursion yes;
    allow-recursion { local; };
    listen-on { 10.0.2.53; };  # ns1 IP address
    allow-transfer { none; };
    
  3. Open the /etc/bind/named.conf.local file to add forward and reverse zones:
    $ sudo nano /etc/bind/named.conf.local
    
  4. Add the forward zone:
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
    };
    
  5. Add the reverse zone:
    zone "2.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10";
    };
    
  6. Create the zones directory under /etc/bind/:
    $ sudo mkdir /etc/bind/zones
    
  7. Create the forward zone file using the existing zone file, db.local, as a template:
    $ cd /etc/bind/
    $ sudo cp db.local  zones/db.example.com
    
  8. The default file should look similar to the following image:
    How to do it…
  9. Edit the SOA entry and replace localhost with FQDN of your server.
  10. Increment the serial number (you can use the current date time as the serial number, 201507071100)
  11. Remove entries for localhost, 127.0.0.1 and ::1.
  12. Add new records:
    ; name server - NS records
    @  IN  NS  ns.exmple.com
    ; name server A records 
    ns  IN  A 10.0.2.53
    ; local - A records
    host1  IN A  10.0.2.58
    
  13. Save the changes and exit the nano editor. The final file should look similar to the following image:
    How to do it…
  14. Now create the reverse zone file using /etc/bind/db.127 as a template:
    $ sudo cp db.127 zones/db.10
    
  15. The default file should look similar to the following screenshot:
    How to do it…
  16. Change the SOA record and increment the serial number.
  17. Remove NS and PTR records for localhost.
  18. Add NS, PTR, and host records:
    ; NS records
    @  IN  NS  ns.example.com
    ; PTR records
    53  IN  PTR  ns.example.com
    ; host records
    58  IN  PTR  host1.example.com
    
  19. Save the changes. The final file should look similar to the following image:
    How to do it…
  20. Check the configuration files for syntax errors. It should end with no output:
    $ sudo named-checkconf
    
  21. Check zone files for syntax errors:
    $ sudo named-checkzone example.com /etc/bind/zones/db.example.com
    
  22. If there are no errors, you should see an output similar to the following:
    zone example.com/IN: loaded serial 3
    OK
    
  23. Check the reverse zone file, zones/db.10:
    $ sudo named-checkzone example.com /etc/bind/zones/db.10
    
  24. If there are no errors, you should see output similar to the following:
    zone example.com/IN: loaded serial 3
    OK
    
  25. Now restart the DNS server bind:
    $ sudo service bind9 restart
    
  26. Log in to host2 and configure it to use ns.example.com as a DNS server. Add ns.example.com to /etc/resolve.conf on host2.
  27. Test forward lookup with the nslookup command:
    $ nslookup host1.example.com 
    
  28. You should see an output similar to following:
    $ nslookup host1.example.com
    Server: 10.0.2.53
    Address: 10.0.2.53#53
    Name: host1.example.com
    Address: 10.0.2.58
    
  29. Now test the reverse lookup:
    $ nslookup 10.0.2.58
    
  30. It should output something similar to the following:
    $ nslookup 10.0.2.58
    Server: 10.0.2.53
    Address: 10.0.2.53#53
    58.2.0.10.in-addr.arpa	name = host1.example.com
    

Set up Secondary Master through the following steps:

  1. First, allow zone transfer on Primary Master by setting the allow-transfer option in /etc/bind/named.conf.local:
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
        allow-transfer { 10.0.2.54; };
    };
    zone "2.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.10";
        allow-transfer { 10.0.2.54; };
    };
    

    Note

    A syntax check will throw errors if you miss semicolons.

  2. Restart BIND9 on Primary Master:
    $ sudo service bind9 restart
    
  3. On Secondary Master (ns2), install the BIND package.
  4. Edit /etc/bind/named.conf.local to add zone declarations as follows:
    zone "example.com" {
        type slave;
        file "db.example.com";
        masters { 10.0.2.53; };
    };
    zone "2.0.10.in-addr.arpa" {
        type slave;
        file "db.10";
        masters { 10.0.2.53; };
    };
    
  5. Save the changes made to named.conf.local.
  6. Restart the BIND server on Secondary Master:
    $ sudo service bind9 restart
    
  7. This will initiate the transfer of all zones configured on Primary Master. You can check the logs on Secondary Master at /var/log/syslog to verify the zone transfer.

Tip

A zone is transferred only if the serial number under the SOA section on Primary Master is greater than that of Secondary Master. Make sure that you increment the serial number after every change to the zone file.

How it works…

In the first section, we have installed the BIND server and enabled a simple caching DNS server. A caching server helps to reduce bandwidth and latency in name resolution. The server will try to resolve queries locally from the cache. If the entry is not available in the cache, the query will be forwarded to external DNS servers and the result will be cached.

In the second and third sections, we have set Primary Master and Secondary Master respectively. Primary Master is the first DNS server. Secondary Master will be used as an alternate server in case the Primary server becomes unavailable.

Under Primary Master, we have declared a forward zone and reverse zone for the example.com domain. The forward zone is declared with domain name as the identifier and contains the type and filename for the database file. On Primary Master, we have set type to master. The reverse zone is declared with similar attributes and uses part of an IP address as an identifier. As we are using a 24-bit network address (10.0.2.0/24), we have included the first three octets of the IP address in reverse order (2.0.10) for the reverse zone name.

Lastly, we have created zone files by using the existing files as templates. Zone files are the actual database containing the records that map IP addresses to FQDNs and vice versa. A zone file contains the SOA record, A records, and NS records. The SOA record defines the zone and its parameters, such as the serial number; A and AAAA records map hostnames to IPv4 and IPv6 addresses respectively.

When the DNS server receives a query for the example.com domain, it checks for the zone file for that domain. After finding the zone file, the host part of the query is used to find the actual IP address to be returned as the result of the query. Similarly, when a query with an IP address is received, the DNS server looks for a reverse zone file matching the queried IP address.
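
For example, adding another host to this setup means adding an A record to the forward zone file and a matching PTR record to the reverse zone file, then incrementing the serial number in both SOA sections. The host name and address used below (host3, 10.0.2.59) are hypothetical:

; in /etc/bind/zones/db.example.com
host3   IN  A    10.0.2.59

; in /etc/bind/zones/db.10
59      IN  PTR  host3.example.com.

Reload the zones for the change to take effect:

$ sudo service bind9 reload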

Hiding behind the proxy with squid

In this recipe, we will install and configure the squid proxy and caching server. The term proxy generally covers two different setups: the forward proxy and the reverse proxy.

When we say proxy, it generally refers to a forward proxy. A forward proxy acts as a gateway between a client's browser and the Internet, requesting content on behalf of the client. This protects intranet clients by exposing the proxy as the only requester. A proxy can also be used as a filtering agent, imposing organizational policies. As all Internet requests go through the proxy server, the proxy can cache the responses and return cached content when the same content is requested again, thus saving bandwidth and time.

A reverse proxy is the exact opposite of a forward proxy. It protects internal servers from the outside world. A reverse proxy accepts requests from external clients and routes them to servers behind the proxy. External clients can see a single entity serving requests, but internally, it can be multiple servers working behind the proxy and sharing the load. More details about reverse proxies are covered in Chapter 3, Working with Web Servers.

In this recipe, we will discuss how to install a squid server. Squid is a well-known application in the forward proxy world and works well as a caching proxy. It supports HTTP, HTTPS, FTP, and other popular network protocols.

Getting ready

As always, you will need access to a root account or an account with sudo privileges.

How to do it…

Following are the steps to set up and configure the Squid proxy:

  1. Squid is quite an old, mature, and commonly used piece of software. It is generally shipped as a default package with various Linux distributions. The Ubuntu package repository contains the necessary pre-compiled binaries, so the installation is as easy as two commands.
  2. First, update the apt cache and then install squid as follows:
    $ sudo apt-get update
    $ sudo apt-get install squid3
    
  3. Edit the /etc/squid3/squid.conf file:
    $ sudo nano /etc/squid3/squid.conf
    
  4. Ensure that the cache_dir directive is not commented out:
    cache_dir ufs /var/spool/squid3 100 16 256
    
  5. Optionally, change the http_port directive to your desired TCP port:
    http_port 8080
    
  6. Optionally, change the squid hostname:
    visible_hostname proxy1
    
  7. Save changes with Ctrl + O and exit with Ctrl + X.
  8. Restart the squid server:
    $ sudo service squid3 restart
    
  9. Make sure that you have allowed the selected http_port on the firewall.
  10. Next, configure your browser to use the squid server as the HTTP/HTTPS proxy.
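
Whenever you modify squid.conf, it is worth checking the file for syntax errors before restarting the service. A quick sketch, assuming the squid3 binary name used by this package:

$ sudo squid3 -k parse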

How it works…

Squid is available as a package in the Ubuntu repository, so you can install it directly with the apt-get install squid3 command. After installing squid, we need to edit the squid.conf file for some basic settings. The squid.conf file is quite big and lists a large number of directives along with their explanations. It is recommended to create a copy of the original configuration file as a reference before you make any modifications.

In our example, we are changing the port squid listens on. The default port is 3128. This is just a security precaution and it's fine if you want to run squid on the default port. Secondly, we have changed the hostname for squid.

Another important directive to look at is cache_dir. Make sure that this directive is enabled, and also set the cache size. The following example sets cache_dir to /var/spool/squid3 with the cache size set to 100 MB:

cache_dir ufs /var/spool/squid3 100  16  256

To check the cache utilization, use the following command:

$ sudo du /var/spool/squid3

There's more…

Squid provides a lot more features than a simple proxy server. The following is a quick list of some of the important ones:

Access control list

With squid ACLs, you can set the list of IP addresses allowed to use squid. Add the following line at the bottom of the acl section of /etc/squid3/squid.conf:

acl developers  src 192.168.2.0/24

Then, add the following line at the top of the http_access section in the same file:

http_access allow developers
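
Putting these two directives together, a minimal access policy that admits only the developers subnet and denies everything else could look like the following sketch (the subnet is just the example used above):

acl developers src 192.168.2.0/24
http_access allow developers
http_access deny all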

Set cache refresh rules

You can change squid's caching behavior depending on the file type. Add the following line to cache all image files, with a minimum time of an hour and a maximum of a day:

refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$  3600   90%   86400

This line uses a regular expression to match file names that end with any of the listed file extensions (gif, png, and so on).

Sarg – tool to analyze squid logs

Squid Analysis Report Generator (Sarg) is an open source tool to monitor squid server usage. It parses the logs generated by Squid and converts them into easy-to-digest HTML-based reports. You can track various metrics such as bandwidth used per user, top sites, downloads, and so on. Sarg can be quickly installed with the following command:

$ sudo apt-get install sarg

The configuration file for Sarg is located at /etc/squid/sarg.conf. Once installed, set the output_dir path and run sarg. You can also set cron jobs to execute sarg periodically. The generated reports are stored in output_dir and can be accessed with the help of a web server.
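
For example, a daily report could be generated with a root cron entry similar to the following sketch, which runs sarg every day at 02:00; the binary path is illustrative:

0 2 * * * /usr/bin/sarg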

Squid guard

Squid guard is another useful plugin for the squid server. It is generally used to block access to a list of websites so that these sites are inaccessible from the internal network. As always, it can be installed with a single command, as follows:

$ sudo apt-get install squidguard

The configuration file is located at /etc/squid/squidGuard.conf.
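
As a rough sketch of how a block list is wired up: squidGuard is configured with a destination group and an acl that redirects blocked requests, and squid is then pointed at squidGuard through the url_rewrite_program directive. The list name, paths, and redirect URL below are purely illustrative:

dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

dest blocked {
    domainlist blocked/domains
}

acl {
    default {
        pass !blocked all
        redirect http://proxy1/blocked.html
    }
}

Then, in /etc/squid3/squid.conf, point squid at squidGuard:

url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf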

Being on time with NTP

Network Time Protocol (NTP) is a TCP/IP protocol for synchronizing time over a network. Although Ubuntu has a built-in clock that is helpful for keeping track of local events, it may create issues when the server is connected over a network and provides time-critical services to the clients. This problem can be solved with the help of NTP time synchronization. NTP works by synchronizing time across all servers on the Internet.

NTP uses a hierarchy of servers, with top-level servers synchronizing time with atomic clocks. The levels in this hierarchy are known as strata, and a stratum can range between 1 and 15, both inclusive. Stratum 1 is the highest level, and a server's stratum is determined by the clock it synchronizes with: if a server synchronizes with another NTP server at stratum level 3, then its own stratum level is automatically set to 4.

Another time synchronization tool provided by Ubuntu is ntpdate, which comes preinstalled with Ubuntu. It executes once at boot time and synchronizes the local time with Ubuntu's NTP servers. The problem with ntpdate is that it matches server time with central time without considering the big drifts in local time, whereas the NTP daemon ntpd continuously adjusts the server time to match it with the reference clock. As mentioned in the ntpdate manual pages (man ntpdate), you can use ntpdate multiple times throughout a day to keep time drifts low and get more accurate results, but it does not match the accuracy and reliability provided by ntpd.

In this recipe, we will set up a standalone time server for an internal network. Our time server will synchronize its time with public time servers and provide a time service to internal NTP clients.

How to do it…

Following are the steps to install and configure the NTP daemon:

  1. First, synchronize the server's time with any Internet time server using the ntpdate command:
    $ sudo ntpdate -s ntp.ubuntu.com
    
  2. To install ntpd, enter the following command in the terminal:
    $ sudo apt-get install ntp
    
  3. Edit the /etc/ntp.conf NTP configuration file to add/remove external NTP servers:
    $ sudo nano /etc/ntp.conf
    
  4. Set a fallback NTP server:
    server ntp.ubuntu.com
    
  5. To block external access to the server, comment out the first restrict line and add the following line:
    restrict default noquery notrust nomodify
    
  6. Allow the clients on the local network to use the NTP service:
    restrict 192.168.1.0 mask 255.255.255.0
    
  7. Save changes with Ctrl + O and exit nano with Ctrl + X.
  8. Reload the NTP daemon with the following command:
    $ sudo service ntp restart
    

How it works…

Sometimes, the NTP daemon refuses to work if the time difference between local time and central time is too big. To avoid this problem, we have synchronized the local time and central time before installing ntpd. As ntpd and ntpdate both use the same UDP port, 123, the ntpdate command will not work when the ntpd service is in use.

Tip

Make sure that you have opened UDP port 123 on the firewall.

After installing the NTP server, you may want to set time servers to be used. The default configuration file contains time servers provided by Ubuntu. You can use the same default servers or simply comment the lines by adding # at the start of each line and add the servers of your choice. You can dig into http://www.pool.ntp.org to find time servers for your specific region. It is a good idea to provide multiple reference servers, as NTP can provide more accurate results after querying each of them.

Note

You can control polling intervals for each server with the minpoll and maxpoll parameters. The value is set in seconds to the power of two. minpoll defaults to 6 (2^6 = 64 sec) and maxpoll defaults to 10 (2^10 = 1024 sec).

Additionally, we have set a fallback server that can be used in case of network outage or any other problems when our server cannot communicate with external reference servers. You can also use a system clock as a fallback, which can be accessed at 127.127.1.0. Simply replace the fallback server with the following line to use a system clock as a fallback:

server 127.127.1.0

Lastly, we have set access control parameters to protect our server from external access. The default configuration is to allow anyone to use the time service from this server. By changing the first restrict line, we blocked all external access to the server. The configuration already contains the exception to local NTP service indicated by the following:

restrict 127.0.0.1

We created another exception by adding a separate line to allow access to the clients on local network (remember to replace the IP range with your network details):

restrict 192.168.1.0 mask 255.255.255.0
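
Once ntpd is running, you can check which reference servers it has selected and how well the local clock tracks them; this uses the standard ntpq tool shipped with the ntp package rather than anything specific to this configuration:

$ ntpq -p

The output lists each peer with its stratum, polling interval, reachability, and offset; an asterisk marks the peer currently used for synchronization.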

There's more…

A central DHCP server can be configured to provide NTP settings to all DHCP clients. For this to work, your clients should also be configured to query NTP details from DHCP. A DHCP client configuration on Ubuntu already contains the query for network time servers.

Add the following line to your DHCP configuration to provide NTP details to the clients:

subnet 192.168.1.0 netmask 255.255.255.0 {
    ...
    option ntp-servers  your_ntp_host;
}

On the client side, make sure that your dhclient.conf contains ntp-servers in its default request:

request subnet-mask, broadcast-address, time-offset, routers,
        ...
        rfc3442-classless-static-routes, ntp-servers,

See also

  • Check the default /etc/ntp.conf configuration file. It contains a short explanation for each setting.
  • Check the manual pages for ntpd with man ntpd.


Discussing load balancing with HAProxy

When an application becomes popular, the number of requests sent to the application server grows. A single application server may not be able to handle the entire load alone. We can always scale up the underlying hardware, that is, add more memory and more powerful CPUs to increase the server capacity, but these improvements do not always scale linearly. To solve this problem, multiple replicas of the application server are created and the load is distributed among them. Load balancing can be implemented at OSI Layer 4, that is, at the TCP or UDP protocol level, or at Layer 7, that is, at the application level with protocols such as HTTP, SMTP, and DNS.

In this recipe, we will install a popular load balancing or load distributing service, HAProxy. HAProxy receives all requests from clients and directs them to an actual application server for processing; the server's response is then relayed back to the client through HAProxy. We will be setting up HAProxy to load balance TCP connections.

Getting ready

You will need two or more application servers and one server for HAProxy:

  • You will need root access on the server where you want to install HAProxy
  • It is assumed that your application servers are properly installed and working

How to do it…

Follow these steps to set up load balancing with HAProxy:

  1. Install HAProxy:
    $ sudo apt-get update
    $ sudo apt-get install haproxy
    
  2. Enable the HAProxy init script to automatically start HAProxy on system boot. Open /etc/default/haproxy and set ENABLE to 1.
  3. Now, edit the HAProxy /etc/haproxy/haproxy.cfg configuration file. You may want to create a copy of this file before editing:
    $ cd /etc/haproxy
    $ sudo cp haproxy.cfg haproxy.cfg.copy
    $ sudo nano haproxy.cfg
    
  4. Find the defaults section and change the mode and option parameters to match the following:
    mode  tcp
    option  tcplog
    
  5. Next, define frontend, which will receive all requests:
    frontend www
       bind 57.105.2.204:80    # haproxy public IP
       default_backend as-backend    # backend used
    
  6. Define backend application servers:
    backend as-backend
       balance leastconn
       mode tcp
       
       server as1 10.0.2.71:80 check    # application srv 1
       server as2 10.0.2.72:80 check    # application srv 2
    
  7. Save and quit the HAProxy configuration file.
  8. We need to set rsyslog to accept HAProxy logs. Open the rsyslog.conf file, /etc/rsyslog.conf, and uncomment the following parameters:
    $ModLoad imudp
    $UDPServerRun 514
    
  9. Next, create a new file under /etc/rsyslog.d to specify the HAProxy log location:
    $ sudo nano /etc/rsyslog.d/haproxy.conf
    
  10. Add the following line to the newly created file:
    local2.*  /var/log/haproxy.log
    
  11. Save the changes and exit the new file.
  12. Restart the rsyslog service:
    $ sudo service rsyslog restart
    
  13. Restart HAProxy:
    $ sudo service haproxy restart
    
  14. Now, you should be able to access your backend with the HAProxy IP address.
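
Whenever you edit haproxy.cfg, you can validate it before restarting the service; the -c flag only checks the configuration file and does not start the proxy:

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg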

How it works…

Here, we have configured HAProxy as a frontend for a cluster of application servers. Under the frontend section, we have configured HAProxy to listen on the public IP of the HAProxy server. We also specified a backend for this frontend. Under the backend section, we have set a private IP address of the application servers. HAProxy will communicate with the application servers through a private network interface. This will help to keep the internal network latency to a minimum.

HAProxy supports various load balancing algorithms. Some of them are as follows:

  • Round-robin distributes the load in a round robin fashion. This is the default algorithm used.
  • leastconn selects the backend server with fewest connections.
  • source uses the hash of the client's IP address and maps it to the backend. This ensures that requests from a single user are served by the same backend server.

We have selected the leastconn algorithm, which is mentioned under the backend section with the balance leastconn line. The selection of a load balancing algorithm will depend on the type of application and length of connections.
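
For example, if your application keeps per-user session state on the backend servers, you could pin each client to the same server by swapping the balance line in the backend section; a small variation on the configuration shown earlier:

backend as-backend
   balance source
   mode tcp
   server as1 10.0.2.71:80 check
   server as2 10.0.2.72:80 check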

Lastly, we configured rsyslog to accept logs over UDP. HAProxy does not provide a separate logging system; it passes logs to the system log daemon, rsyslog, over UDP.

There's more …

Depending on your Ubuntu version, you may not get the latest version of HAProxy from the default apt repository. Use the following repository to install the latest release:

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:vbernat/haproxy-1.6  # replace 1.6 with required version
$ sudo apt-get update && sudo apt-get install haproxy

Tuning the TCP stack

Transmission Control Protocol and Internet Protocol (TCP/IP) is a standard set of protocols used by every network-enabled device. TCP/IP defines the standards for communicating over a network and is divided into two parts: TCP and IP. IP defines the rules for IP addressing and for routing packets over the network, and gives each host on the network an identity in the form of an IP address. TCP deals with the interconnection between two hosts and enables them to exchange data over the network. TCP is a connection-oriented protocol that controls the ordering of packets, retransmission, error detection, and other reliability tasks.

The TCP stack is designed to be very general in nature so that it can be used by anyone under any network conditions. Servers use the same TCP/IP stack as their clients. For this reason, the default values are configured for general use and are not optimized for high-load server environments. The Linux kernel provides a tool called sysctl that can be used to modify kernel parameters at runtime without recompiling the entire kernel. We can use sysctl to modify TCP/IP parameters to match our needs.

In this recipe, we will look at various kernel parameters that control the network. It is not required to modify all parameters listed here. You can choose ones that are required and suitable for your system and network environment.

It is advisable to test these modifications on a local system before making any changes in a live environment. Many of these parameters directly affect network connections and the related CPU and memory use, which can result in connection drops and/or sudden increases in resource use. Make sure that you have read the documentation for a parameter before you change anything.

Also, it is a good idea to set benchmarks before and after making any changes to sysctl parameters. This will give you a base to compare improvements, if any. Again, benchmarks may not reveal all the effects of parameter changes. Make sure that you have read the respective documentation.

Getting ready…

You will need root access.

Note down basic performance metrics with the tool of your choice.

How to do it…

Follow these steps to tune the TCP stack:

  1. Set the maximum open files limit:
    $ ulimit -n           # check the existing limit for the logged-in user
    # ulimit -n 65535     # only root can set values above the hard limit
    
  2. To permanently set limits for a user, open /etc/security/limits.conf and add the following lines at the end of the file. Make sure to replace the values in angle brackets, <>:
    <username>  soft  nofile  <value>     # soft limits
    <username>  hard  nofile  <value>     # hard limits
    
  3. Save limits.conf and exit. Then restart the user session.
  4. View all available parameters:
    # sysctl -a
    
  5. Set the TCP default read-write buffer:
    # echo 'net.core.rmem_default=65536' >> /etc/sysctl.conf
    # echo 'net.core.wmem_default=65536' >> /etc/sysctl.conf
    
  6. Set the TCP read and write buffers to 8 MB:
    # echo 'net.core.rmem_max=8388608' >> /etc/sysctl.conf
    # echo 'net.core.wmem_max=8388608' >> /etc/sysctl.conf
    
  7. Increase the maximum TCP orphans:
    # echo 'net.ipv4.tcp_max_orphans=4096' >> /etc/sysctl.conf
    
  8. Disable slow start after being idle:
    # echo 'net.ipv4.tcp_slow_start_after_idle=0' >> /etc/sysctl.conf
    
  9. Minimize TCP connection retries:
    # echo 'net.ipv4.tcp_synack_retries=3' >> /etc/sysctl.conf
    # echo 'net.ipv4.tcp_syn_retries =3' >> /etc/sysctl.conf
    
  10. Set the TCP window scaling:
    # echo 'net.ipv4.tcp_window_scaling=1' >> /etc/sysctl.conf
    
  11. Enable TCP timestamps:
    # echo 'net.ipv4.tcp_timestamps=1' >> /etc/sysctl.conf
    
  12. Enable selective acknowledgements (SACK):
    # echo 'net.ipv4.tcp_sack=1' >> /etc/sysctl.conf
    
  13. Set the maximum number of times an IPv4 packet can be reordered in a TCP stream before TCP assumes packet loss:
    # echo 'net.ipv4.tcp_reordering=3' >> /etc/sysctl.conf
    
  14. Enable TCP Fast Open to send data in the opening SYN packet (a value of 1 enables client-side support):
    # echo 'net.ipv4.tcp_fastopen=1'  >> /etc/sysctl.conf
    
  15. Set the number of half-open connections (SYN received, not yet acknowledged) to remember:
    # echo 'net.ipv4.tcp_max_syn_backlog=1500' >> /etc/sysctl.conf
    
  16. Set the number of TCP keep-alive probes to send before deciding that the connection is broken:
    # echo 'net.ipv4.tcp_keepalive_probes=5' >> /etc/sysctl.conf
    
  17. Set the keep-alive time, that is, how long a connection stays idle before keep-alive probes are sent:
    # echo 'net.ipv4.tcp_keepalive_time=1800' >> /etc/sysctl.conf
    
  18. Set the interval between keep-alive probes:
    # echo 'net.ipv4.tcp_keepalive_intvl=60' >> /etc/sysctl.conf
    
  19. Allow connections in the TIME_WAIT state to be reused or recycled (note that tcp_tw_recycle is known to break clients behind NAT and has been removed from newer Linux kernels):
    # echo 'net.ipv4.tcp_tw_reuse=1' >> /etc/sysctl.conf
    # echo 'net.ipv4.tcp_tw_recycle=1' >> /etc/sysctl.conf
    
  20. Widen the range of local ports available for outgoing connections:
    # echo 'net.ipv4.ip_local_port_range=32768 65535' >> /etc/sysctl.conf
    
  21. Set the TCP FIN timeout:
    # echo 'net.ipv4.tcp_fin_timeout=60' >> /etc/sysctl.conf
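    
  22. The appended settings are not active until they are loaded. Assuming you keep them in /etc/sysctl.conf as shown above, reload that file to apply them without a reboot:
    # sysctl -p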
    

How it works…

The behavior of the Linux kernel can be fine-tuned with the help of various kernel parameters. These are options passed to the kernel in order to control different aspects of the system. The parameters can be set while compiling the kernel, at boot time, or at runtime using the /proc filesystem and tools such as sysctl.

In this recipe, we have used sysctl to configure network-related kernel parameters and fine-tune the network settings. Again, you need to cross-check each configuration to see whether it is working as expected.

Along with the network parameters, tons of other kernel parameters can be configured with the sysctl command. The -a flag to sysctl lists all the available parameters:

$ sysctl -a

All these settings are exposed as files under the /proc/sys directory, grouped by category. You can read and write these files directly or use the sysctl command:

ubuntu@ubuntu:~$ sysctl fs.file-max
fs.file-max = 98869
ubuntu@ubuntu:~$ cat /proc/sys/fs/file-max
98869
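
As a quick sketch of the runtime interface (the value below is only an example, not a recommendation), a parameter can also be changed temporarily with sysctl -w or by writing to the corresponding /proc file; such a change is lost on reboot unless it is also added to /etc/sysctl.conf:

# sysctl -w net.core.rmem_max=8388608
# echo 8388608 > /proc/sys/net/core/rmem_max
# sysctl net.core.rmem_max
net.core.rmem_max = 8388608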

Troubleshooting network connectivity

Networking consists of various components and services working together to enable systems to communicate with each other. Quite often everything seems fine, yet we are unable to reach other servers or the Internet. In this recipe, we will look at some of the tools provided by Ubuntu to troubleshoot network connectivity issues.

Getting ready

As you are reading this recipe, I am assuming that you are facing a networking issue and that the problems are with the primary network adapter, eth0.

You may need access to the root account or an account with similar privileges.

How to do it…

Follow these steps to troubleshoot network connectivity:

  1. Let's start by checking whether the network card is working properly and is detected by Ubuntu. Check the boot-time logs and search for lines related to the Ethernet interface, eth:
    $ dmesg | grep eth
    
  2. If you don't find anything in the boot logs, then most probably, your network hardware is faulty or unsupported by Ubuntu.
  3. Next, check whether the network cable is plugged in and is working properly. You can simply check the LED indicators on the network card or use the following command:
    $ sudo mii-tool
    
  4. If you can see a line with link ok, then you have a working Ethernet connection.
  5. Next, check whether a proper IP address is assigned to the eth0 Ethernet port:
    $ ifconfig eth0
    
  6. Check whether you can find a line that starts with inet addr. If you cannot find this line, or the address starts with 169.254 (a link-local address), then you don't have an IP address assigned.
  7. Even if you see a line with an IP address, make sure that it is valid for the network you are connected to.
  8. Now, assuming that you have not been assigned an IP address, let's try to get a dynamic IP address from the DHCP server. Make sure that eth0 is set for dynamic configuration; you should see a line similar to iface eth0 inet dhcp:
    $ cat /etc/network/interfaces
    
  9. Execute the dhclient command to query the local DHCP server:
    $ sudo dhclient -v
    
  10. If you can see a line similar to bound to 10.0.2.15, then you have been assigned a new IP address. If you keep getting DHCPDISCOVER messages, it means that your DHCP server is not reachable or is not assigning an IP address to this client.
  11. Now, if you check the IP address again, you should see the newly assigned IP address listed:
    $ ifconfig eth0
    
  12. Assuming that you have received a proper IP address, let's move on to the default gateway:
    $ ip route
    
  13. The preceding command lists our default route; in my case, it is 10.0.2.2. Let's try to ping the default gateway:
    $ ping -c 5 10.0.2.2
    
  14. If you get a response from the gateway, your local network is working properly. If you do not get a response from the gateway, you may want to check your local firewall.
  15. Check the firewall status:
    $ sudo ufw status
    
  16. Check the rules or temporarily disable the firewall and retry reaching your gateway:
    $ sudo ufw disable
    
  17. Next, check whether we can go beyond our gateway. Try to ping an external server. I am trying to ping a public DNS server by Google:
    $ ping -c 5 8.8.8.8
    
  18. If you successfully receive a response, then you have a working network connection. If this does not work, then you can check the problem with the mtr command. This command will display each router between your server and the destination server:
    $ mtr -r -c 1 8.8.8.8
    
  19. Next, we need to check DNS servers:
    $ nslookup www.ubuntu.com
    
  20. If you received an IP address for the Ubuntu servers, then DNS resolution is working properly. If it is not, you can try changing the DNS servers temporarily. Add a nameserver entry to /etc/resolv.conf above any other nameserver entries, if present:
    nameserver 8.8.8.8
    
  21. At this point, you should be able to access the Internet. Try to ping an external server by its name:
    $ ping -c 3 www.ubuntu.com
    

There's more…

The following are some additional commands that may come handy while working with a network:

  • lspci lists all PCI devices. Combine it with grep to search for a specific device.
  • lsmod shows the status of modules in the Linux kernel.
  • ip link lists all the available network devices with status and configuration parameters.
  • ip addr shows the IP addresses assigned for each device.
  • ip route displays routing table entries.
  • tracepath/traceroute lists all the routers (path) between local and remote hosts.
  • iptables is an administration tool for packet filtering and NAT.
  • dig is a DNS lookup utility.
  • ethtool queries and controls network drivers and hardware settings.
  • route views or edits the IP routing table.
  • telnet is the client for the Telnet protocol; nowadays it is mostly a quick way to check whether a remote port is open.
  • nmap is a powerful network mapping and port-scanning tool.
  • netstat displays network connections, routing tables, interface stats, and more.
  • ifdown and ifup stop or start a network interface, similar to ifconfig eth0 down and ifconfig eth0 up.
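
A few illustrative invocations of these tools (assuming the primary interface is eth0, as elsewhere in this recipe; the targets are only examples):

$ ip addr show eth0         # addresses assigned to eth0
$ ip route                  # current routing table
$ dig www.ubuntu.com        # query DNS directly
$ ethtool eth0              # link speed, duplex, and driver details
$ tracepath 8.8.8.8         # routers between this host and 8.8.8.8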

Securing remote access with OpenVPN

VPN enables two or more systems to communicate privately and securely over a public network such as the Internet. The network traffic is routed over the Internet, but it is encrypted. You can use a VPN to set up a secure connection between two datacenters or to access office resources from the comfort of your home. A VPN service can also be used to protect your online activities, access location-restricted content, and bypass restrictions imposed by your ISP.

VPN services are implemented with a number of different protocols, such as Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), IPsec, and SSL. In this recipe, we will set up a free VPN server with OpenVPN, an open source SSL VPN solution that offers a wide range of configurations. OpenVPN can be configured to use either TCP or UDP; in this recipe, we will use its default UDP port, 1194.
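
For reference, the protocol and port are controlled by two directives in server.conf. The values below are the defaults from the sample configuration; switching to, say, TCP on port 443 is only an illustration, not something this recipe requires:

proto udp      # or: proto tcp
port 1194      # for example: port 443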

Getting ready…

You will need one server and one client system and root or equivalent access to both systems.

How to do it…

  1. Install OpenVPN with the following command:
    $ sudo apt-get update
    $ sudo apt-get install openvpn easy-rsa
    
  2. Now, set up your own certificate authority and generate a certificate and keys for the OpenVPN server.
  3. Next, we need to edit OpenVPN files that are owned by the root user, and the build-ca script needs root access to write new keys. Temporarily change to the root account using sudo su:
    $ sudo su
    

    Copy the Easy-RSA directory to /etc/openvpn:

    # cp -r /usr/share/easy-rsa  /etc/openvpn/
    
  4. Now edit /etc/openvpn/easy-rsa/vars and change the variables to match your environment:
      export KEY_COUNTRY="US"
      export KEY_PROVINCE="ca"
      export KEY_CITY="your city"
      export KEY_ORG="your Company"
      export KEY_EMAIL="[email protected]"
      export KEY_CN="MyVPN"
      export KEY_NAME="MyVPN"
      export KEY_OU="MyVPN"
    
  5. Generate the master certificate with the following commands:
    # cd /etc/openvpn/easy-rsa
    # source vars
    # ./clean-all
    # ./build-ca
    
  6. Next, generate a certificate and private key for the server. Replace the server name with the name of your server:
    # ./build-key-server servername
    
  7. Press the Enter key when prompted for the password and company name.
  8. When asked to sign the certificate, enter y and then press the Enter key.
  9. Build Diffie Hellman parameters for the OpenVPN server:
    # ./build-dh
    
  10. Copy all the generated keys and certificates to /etc/openvpn:
    # cp /etc/openvpn/easy-rsa/keys/{servername.crt,servername.key,ca.crt,dh2048.pem} /etc/openvpn
    
  11. Next, generate a certificate for the client with the following commands:
    # cd /etc/openvpn/easy-rsa
    # source vars
    # ./build-key clientname
    
  12. Copy the generated key, certificate, and server certificate to the client system. Use a secure transfer mechanism such as SCP:
    /etc/openvpn/ca.crt
    /etc/openvpn/easy-rsa/keys/clientname.crt
    /etc/openvpn/easy-rsa/keys/clientname.key
    
  13. Now, configure the OpenVPN server using the sample configuration file provided by OpenVPN (we are still working as root, so the redirection to /etc/openvpn succeeds):
    # gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz > /etc/openvpn/server.conf
    
  14. Open server.conf in your favorite editor:
    # nano /etc/openvpn/server.conf
    
  15. Make sure that the certificate and key paths are set properly:
    ca ca.crt
    cert servername.crt
    key servername.key
    dh dh2048.pem
    
  16. Enable clients to redirect their web traffic through a VPN server. Uncomment the following line:
    push "redirect-gateway def1 bypass-dhcp"
    
  17. To protect against DNS leaks, push DNS settings to VPN clients and uncomment the following lines:
    push "dhcp-option DNS 208.67.222.222"
    push "dhcp-option DNS 208.67.220.220"
    
  18. The preceding lines point to OpenDNS servers. You can set them to any DNS server of your choice.
  19. Lastly, set OpenVPN to run as an unprivileged user and group by uncommenting the following lines:
    user nobody
    group nogroup
    
  20. Optionally, you can enable compression on the VPN link. Search and uncomment the following line:
    comp-lzo
    
  21. Save the changes and exit the editor.
  22. Next, edit /etc/sysctl.conf to enable IP forwarding. Find the following line and uncomment it by removing the hash, #, in front of it:
    #net.ipv4.ip_forward=1
    
  23. Update sysctl settings with the following command:
    # sysctl -p
    
  24. Now start the server. You should see an output similar to the following:
    # service openvpn start
     * Starting virtual private network daemon(s)
     *   Autostarting VPN 'server'
    
  25. When it starts successfully, OpenVPN creates a new network interface named tun0. This can be checked with the ifconfig command:
    # ifconfig tun0
    tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
    
  26. If the server does not start normally, you can check the logs at /var/log/syslog. It should list all the steps completed by the OpenVPN service.
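
To quickly verify the setup (a rough check; the port number assumes the default UDP 1194 used in this recipe), confirm that IP forwarding is active and that OpenVPN is listening:

# cat /proc/sys/net/ipv4/ip_forward
1
# netstat -ulnp | grep 1194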

How it works…

OpenVPN is an open source VPN solution. It is a traffic-tunneling protocol that works in client-server mode. You might already know that VPNs are widely used to create a private and secure network connection between two endpoints; they are generally used to access your servers or office systems from home. Another popular use of VPN servers is to protect your privacy by routing your traffic through the VPN server. OpenVPN needs two primary components, namely a server and a client, and the preceding recipe installs the server component. When the OpenVPN service is started on the host, it creates a new virtual network interface, a tun device named tun0. On the client side, OpenVPN provides tools that configure a similar setup by creating a matching tun device on the client's system.

Once the client is configured with the server's hostname or IP address, the server certificate, and the client keys, it initiates a virtual network connection from its tun device to the tun device on the server. The provided keys and certificates are used to verify the server's authenticity and then to authenticate the client. Once the session is established, all network traffic on the client system is routed, or tunneled, via the tun network interface. Any external service accessed by the OpenVPN client sees the requests as if they originated from the OpenVPN server and not from the client. Additionally, the traffic between the server and the client is encrypted, providing additional security.

There's more…

In this recipe, we have installed and configured the OpenVPN server. To use the VPN service from your local system, you will need a VPN client.

The following are the steps to install and configure the OpenVPN client on Ubuntu systems:

  1. Install the OpenVPN client with a command similar to the one we used to install the server:
    $ sudo apt-get update
    $ sudo apt-get install openvpn
    
  2. Copy the sample client.conf configuration file:
    $ sudo cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn/
    
  3. Copy the certificates and keys generated for this client. Along with the key shown here, copy the matching client certificate and ca.crt in the same way:
    $ scp user@yourvpnserver:/etc/openvpn/easy-rsa/keys/client1.key /etc/openvpn
    
  4. You can use other tools such as SFTP or WinSCP on the Windows systems.
  5. Now edit client.conf, enable client mode, and specify the server name or address:
    client
    remote your.vpnserver.com 1194
    
  6. Make sure that you have set the correct paths for the certificate and keys copied from the server (see the sketch after these steps).
  7. Now save the configuration file and start the OpenVPN service:
    $ sudo service openvpn start
    
  8. This should create the tun0 network interface:
    $ ifconfig tun0
    
  9. Check the new routes created by VPN:
    $ netstat -rn
    
  10. You can test your VPN connection with any What's My IP service. You can also take a DNS leak test with online DNS leak tests.

    For Windows and Mac OS systems, OpenVPN provides respective client tools. You need an OpenVPN profile with the .ovpn extension. A template can be found with the OpenVPN client you are using or on the server under OpenVPN examples. The following is the complete path:

    /usr/share/doc/openvpn/examples/sample-config-files/client.conf
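
    As a minimal sketch of the paths mentioned in step 6 (the file names assume the client1 certificate generated earlier; adjust them to your own names), the relevant client.conf directives look like this:

    ca /etc/openvpn/ca.crt
    cert /etc/openvpn/client1.crt
    key /etc/openvpn/client1.key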
    

Note

Note that OpenVPN also offers a web-based admin interface for managing VPN clients. This is a commercial offering that makes it easy to manage OpenVPN settings and client certificates.

Securing a network with uncomplicated firewall

It is said that the best way to improve server security is to reduce the attack surface. Network communication in any system happens with the help of logical network ports, be it TCP ports or UDP ports. One part of the attack surface is the number of open ports that are waiting for connections to be established. It is always a good idea to block all ports that are not required. Any traffic coming to these ports can be filtered, that is, allowed or blocked, with the help of a filtering system.

The Linux kernel provides a built-in packet filtering mechanism called netfilter, which is used to filter the traffic coming in or going out of the system. All modern Linux firewall systems use netfilter under the hood. iptables is a well-known and popular user-space tool used to set up and manage the filtering rules for netfilter. It is a complete firewall solution that is highly configurable and highly flexible. However, iptables requires some effort on the user's part to master the firewall setup. Various frontend tools have been developed to simplify the configuration of iptables, and UFW is among the most popular of them.

Uncomplicated Firewall (UFW) provides an easy-to-use interface for people unfamiliar with firewall concepts. It provides a framework for managing netfilter as well as a command-line interface for manipulating the firewall. With its small command set and plain English parameters, UFW makes it quick and easy to understand and set up firewall rules. At the same time, you can use UFW to configure most of the rules possible with iptables. UFW comes preinstalled with all Ubuntu installations after version 8.04 LTS.

In this recipe, we will secure our Ubuntu server with the help of UFW and also look at some advanced configurations possible with UFW.

Getting ready

You will need access to a root account or an account with root privileges.

How to do it…

Follow these steps to secure a network with Uncomplicated Firewall (a consolidated example script is shown after this list):

  1. UFW comes preinstalled on Ubuntu systems. If it's not, you can install it with the following commands:
    $ sudo apt-get update
    $ sudo apt-get install ufw
    
  2. Check the status of UFW:
    $ sudo ufw status
    
  3. Add a new rule to allow SSH:
    $ sudo ufw allow ssh
    
  4. Alternatively, you can use a port number to open a particular port:
    $ sudo ufw allow 22
    
  5. Allow only TCP traffic over HTTP (port 80):
    $ sudo ufw allow http/tcp
    
  6. Deny incoming FTP traffic:
    $ sudo ufw deny ftp
    
  7. Check all added rules before starting the firewall:
    $ sudo ufw show added
    
  8. Now enable the firewall:
    $ sudo ufw enable
    
  9. Check the UFW status; the verbose parameter is optional:
    $ sudo ufw status verbose
    
  10. Get a numbered list of added rules:
    $ sudo ufw status numbered
    
  11. You can also allow all ports in a range by specifying a port range:
    $ sudo ufw allow 1050:5000/tcp
    
  12. If you want to open all ports for a particular IP address, use the following command:
    $ sudo ufw allow from 10.0.2.100
    
  13. Alternatively, you can allow an entire subnet, as follows:
    $ sudo ufw allow from 10.0.2.0/24
    
  14. You can also allow or deny a specific port for a given IP address:
    $ sudo ufw allow from 10.0.2.100 to any port 2222 
    $ sudo ufw deny from 10.0.2.100 to any port 5223
    
  15. To specify a protocol in the preceding rule, use the following command:
    $ sudo ufw deny from 10.0.2.100 proto tcp to any port 5223
    
  16. Deleting rules:
    $ sudo ufw delete allow ftp
    
  17. Delete rules by specifying their numbers:
    $ sudo ufw status numbered
    $ sudo ufw delete 2
    
  18. Add a new rule at a specific number:
    $ sudo ufw insert 1 allow 5222/tcp	# Inserts a rule at number 1
    
  19. If you want to reject outgoing FTP connections, you can use the following command:
    $ sudo ufw reject out ftp
    
  20. UFW also supports application profiles. To view all application profiles, use the following command:
    $ sudo ufw app list
    
  21. Get more information about the app profile using the following command:
    $ sudo ufw app info OpenSSH
    
  22. Allow the application profile as follows:
    $ sudo ufw allow OpenSSH
    
  23. Set ufw logging levels [off|low|medium|high|full] with the help of the following command:
    $ sudo ufw logging medium
    
  24. View firewall reports with the show parameter:
    $ sudo ufw show added    # list of rules added
    $ sudo ufw show raw    # show complete firewall
    
  25. Reset ufw to its default state (all rules will be backed up by UFW):
    $ sudo ufw reset
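
Putting some of these commands together, the following is a minimal sketch of a baseline policy for a server that runs SSH plus a web service; the default deny/allow policies and the --force flag are standard UFW options, but the chosen ports are only an example for this sketch:

    $ sudo ufw default deny incoming    # drop everything that is not explicitly allowed
    $ sudo ufw default allow outgoing
    $ sudo ufw allow ssh                # or your custom SSH port, for example 2222/tcp
    $ sudo ufw allow 80/tcp
    $ sudo ufw allow 443/tcp
    $ sudo ufw --force enable           # --force skips the interactive confirmation
    $ sudo ufw status verbose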
    

There's more…

UFW also provides various configuration files that can be used:

  • /etc/default/ufw: This is the main configuration file.
  • /etc/ufw/sysctl.conf: These are the kernel network variables. Variables in this file override variables in /etc/sysctl.conf.
  • /var/lib/ufw/user[6].rules or /lib/ufw/user[6].rules: These are the rules added via the ufw command.
  • /etc/ufw/before.init: This script is run before UFW initialization.
  • /etc/ufw/after.init: This script is run after UFW initialization.
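
The application profiles listed by ufw app list (steps 20 to 22 above) are plain INI-style files stored under /etc/ufw/applications.d/. As an illustration, a hypothetical profile for a custom service listening on port 8080 might look like the following; the profile name and port are assumptions, not something created in this recipe:

    [MyWebApp]
    title=My web application
    description=Custom application listening on port 8080
    ports=8080/tcp

After adding such a file, sudo ufw app update MyWebApp refreshes the profile and sudo ufw allow MyWebApp opens the listed ports.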

See also

  • Check the logging section of the UFW community page for an explanation of UFW logs at https://help.ubuntu.com/community/UFW
  • Check out the UFW manual pages with the following command:
    $ man ufw
    

Securing against brute force attacks

So, you have installed a minimal setup of Ubuntu, you have set up SSH with public key authentication and disabled password authentication, and you have allowed only a single non-root user to access the server. You have also configured a firewall, spending an entire night understanding the rules, and blocked everything except a few required ports. Does this mean that your server is secure and you are free to get a sound night's sleep? Nope.

Servers are exposed to the public network, and the SSH daemon itself, which is probably the only open service, can be vulnerable to attacks. If you monitor the application and access logs, you will find repeated, systematic login attempts that represent brute force attacks.

Fail2ban is a service that can help you monitor logs in real time and modify iptables rules to block suspected IP addresses. It is an intrusion-prevention framework written in Python. It can be set up to monitor the logs of the SSH daemon as well as web servers. In this recipe, we will discuss how to install and configure fail2ban.

Getting ready

You will need access to a root account or an account with similar privileges.

How to do it…

Follow these steps to secure against brute force attacks:

  1. Fail2ban is available in the Ubuntu package repository, so we can install it directly with apt-get, as follows:
    $ sudo apt-get update
    $ sudo apt-get install fail2ban
    
  2. Create a copy of the fail2ban configuration file for local modifications:
    $ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
    
  3. Open a new configuration file in your favorite editor:
    $ sudo nano /etc/fail2ban/jail.local
    
  4. You may want to modify the settings listed under the [DEFAULT] section.
  5. Add your IP address to the ignoreip list.
  6. Next, set your e-mail address if you wish to receive e-mail notifications of the ban action:
    destemail = youremail@example.com
    sendername = Fail2Ban
    mta = sendmail
    
  7. Set the required value for the action parameter:
    action = %(action_mwl)s
    
  8. Enable the services you want to be monitored by setting enabled = true for each of them. The SSH jail is enabled by default:
    [ssh]
    enabled = true
    
  9. Set other parameters if you want to override the default settings.
  10. Fail2ban provides default configuration options for various applications. These configurations are disabled by default. You can enable them depending on your requirement.
  11. Restart the fail2ban service:
    $ sudo service fail2ban restart
    
  12. Check iptables for the rules created by fail2ban:
    $ sudo iptables -S
    
  13. Try some failed SSH login attempts, preferably from some other system.
  14. Check iptables again. You should find new rules that reject the IP addresses with failed login attempts.

How it works…

Fail2ban works by monitoring the specified log files as they are modified with new log entries. It uses regular expressions called filters to detect log entries that match specific criteria, such as failed login attempts. The default installation of fail2ban provides various filters, which can be found in the /etc/fail2ban/filter.d directory. You can always create your own filters and use them to detect log entries that match your criteria.

Once it detects multiple log entries matching the configured filters within the specified time window, fail2ban adjusts the firewall rules to reject the matching IP address for the configured ban period.
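
As an illustration, a minimal jail.local override combining the settings from steps 5 to 8 might look like the following; the ban time, retry count, and the ignored management address are example values for this sketch, not defaults taken from the recipe:

    [DEFAULT]
    # Never ban your own management address (203.0.113.10 is a placeholder)
    ignoreip = 127.0.0.1/8 203.0.113.10
    # Ban offenders for 10 minutes after 5 failures within a 10 minute window
    bantime  = 600
    findtime = 600
    maxretry = 5
    destemail = youremail@example.com
    mta = sendmail
    action = %(action_mwl)s

    [ssh]
    enabled = true
    port    = ssh
    filter  = sshd
    logpath = /var/log/auth.log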

There's more…

Check out the article about defending against brute force attacks at http://www.la-samhna.de/library/brutessh.html.

The preceding article shows multiple options to defend against SSH brute force attacks. As mentioned in the article, you can use iptables to slow down brute force attacks by temporarily blocking IP addresses:

$ iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT
$ iptables -A INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j LOG --log-prefix "SSH_brute_force "
$ iptables -A INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j DROP

These commands create iptables rules that permit only three new SSH connection attempts per minute. After three attempts, whether they are successful or not, the attempting IP address will be blocked for another 60 seconds.
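
If fail2ban, or a rate-limiting rule like the one above, locks out a legitimate address, the fail2ban-client tool can be used to inspect jails and lift bans; the jail name ssh matches this recipe, while the IP address shown is only an example:

    $ sudo fail2ban-client status              # list active jails
    $ sudo fail2ban-client status ssh          # show currently banned IPs for the ssh jail
    $ sudo fail2ban-client set ssh unbanip 10.0.2.15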

Discussing Ubuntu security best practices

In this recipe, we will look at some best practices for securing Ubuntu systems. Linux is considered a well-secured operating system, and it is quite easy to maintain security and protect our systems from unauthorized access by following a few simple norms or rules.

Getting ready

You will need access to a root account or an account with sudo privileges. These steps are intended for a new server setup, but you can apply them selectively to servers that are already in production.

How to do it…

Follow these steps to apply Ubuntu security best practices:

  1. Install updates from the Ubuntu repository. You can install all the available updates or just select security updates, depending on your choice and requirement:
    $ sudo apt-get update
    $ sudo apt-get upgrade
    
  2. Change the root password; set a strong and complex root password and note it down somewhere. You are not going to use it every day:
    $ sudo passwd
    
  3. Add a new user account and set a strong password for it. You can skip this step if the server already has a non-root account set up, such as the default ubuntu user:
    $ sudo adduser john
    $ sudo passwd john
    
  4. Add the new user to the sudo group:
    $ sudo adduser john sudo
    
  5. Enable public key authentication over SSH and import your public key into the new user's authorized_keys file.
  6. Restrict SSH logins by editing /etc/ssh/sshd_config (a combined example follows this list):
    1. Change the default SSH port:
      	port 2222
      
    2. Disable root login over SSH:
      	PermitRootLogin no
      
    3. Disable password authentication:
      	PasswordAuthentication no
      
    4. Restrict the allowed users and source IP addresses:
      	AllowUsers john@(your-ip) john@(other-ip)
      
  7. Install fail2ban to protect against brute force attacks and set a new SSH port in the fail2ban configuration:
    $ sudo apt-get install fail2ban
    
  8. Optionally, install UFW and allow your desired ports:
    $ sudo ufw allow from <your-IP> to any port 22 proto tcp
    $ sudo ufw allow 80/tcp
    $ sudo ufw enable
    
  9. Maintain periodic snapshots (full-disk backups) of your server. Many cloud service providers offer basic snapshot tools.
  10. Keep an eye on application and system logs. You may want to set up log-monitoring scripts that e-mail any unidentified log entries.
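
For reference, the SSH restrictions from step 6 all live in /etc/ssh/sshd_config; a combined sketch is shown below. The port number, user name, and the 203.0.113.x addresses are placeholders carried over from the steps above, not values this recipe mandates:

    # /etc/ssh/sshd_config (excerpt)
    Port 2222
    PermitRootLogin no
    PasswordAuthentication no
    AllowUsers john@203.0.113.10 john@203.0.113.20

After saving the file, restart the SSH daemon with sudo service ssh restart, and keep your current session open until you have verified that you can still log in with the new settings.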

How it works…

The preceding steps are basic and general security measures. They may change according to your server setup, package selection, and the services running on your server. I will try to cover some more details about specific scenarios. Also, I have not mentioned application-specific security practices for web servers and database servers. A separate recipe will be included in the respective chapters. Again, these configurations may change with your setup.

The steps listed earlier can be included in a single shell script and executed at the first server boot. Some cloud providers offer an option to add scripts that are executed on the first run of the server. You can also use centralized configuration tools such as Ansible, Chef, Puppet, and others. Again, these tools come with their own security risks and increase the total attack surface. This is a tradeoff between ease of setup and server security; make sure that you select a well-known tool if you choose this route.
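
As a sketch of what such a first-boot script could look like, the following chains a few of the earlier steps. The user name john and the sed-based sshd_config edits are illustrative assumptions; review every line against your own requirements before using anything like it:

    #!/bin/bash
    # First-boot hardening sketch (run as root via your provider's user-data mechanism)
    apt-get update && apt-get -y upgrade
    # Create a non-root admin user; password or SSH key to be set separately
    adduser --disabled-password --gecos "" john
    adduser john sudo
    # Harden the SSH daemon
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
    sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    service ssh restart
    # Basic brute force protection
    apt-get -y install fail2ban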

I have also mentioned creating a single user account besides root. I am assuming that you are setting up a production server. With production servers, it is always a good idea to restrict access to one or two system administrators. For production servers, I don't believe in setting up multiple user accounts just for accountability, or even in setting up LDAP-like centralized authentication to manage user accounts. This is a production environment and not your backyard. Moreover, if you follow the latest trends in immutable infrastructure, then you should not allow even a single user to interfere with your live servers. Again, your mileage may vary.

Another thing that is commonly recommended is to set up automated and unattended security updates. This depends on how much you trust your update source. You live in a world powered by open source tools where things can break, and you don't want things to go haywire without you even touching the servers. I would recommend setting up unattended updates on your staging or test environment and then periodically installing updates on live servers manually. Always have a snapshot of the working setup as your plan B.
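
If you do decide to enable unattended security updates, for example on the staging environment suggested above, Ubuntu's unattended-upgrades package handles this; a minimal sketch:

    $ sudo apt-get install unattended-upgrades
    $ sudo dpkg-reconfigure --priority=low unattended-upgrades

The second command writes /etc/apt/apt.conf.d/20auto-upgrades, which you can also edit by hand to control how often package lists are refreshed and upgrades are applied.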

You may want to skip host-based firewalls such as UFW when you have specialized firewalls protecting your network. As long as the servers are not directly exposed to the Internet, you can skip the local firewalls.

Minimize the installed packages and services on a single server. Remember the Unix philosophy, do one thing and do it well, and follow it. By minimizing the installed packages, you will effectively reduce the attack surface, and maybe save a little on resources too. Think of it as a house with a single door versus a house with multiple doors. Also, running a single service per server provides layered security; this way, if one server is compromised, the rest of your infrastructure remains in a safe state.

Remember that, with all other tradeoffs in place, you cannot design a perfectly secure system; there is always a possibility that someone will break in. Direct your efforts towards increasing the time required for an attacker to break into your servers.

See also
