
Snort as NIDS: Installation and Configuration, Step by Step.


Install Required Dependencies
#############################

apt-get update -y

apt-get upgrade -y

apt-get install openssh-server ethtool build-essential libpcap-dev libpcre3-dev libdumbnet-dev bison flex zlib1g-dev liblzma-dev openssl libssl-dev

wget https://www.snort.org/downloads/snort/daq-2.0.6.tar.gz

tar -zxvf daq-2.0.6.tar.gz

cd daq-2.0.6

./configure && make && make install


Install Snort from Source:
##########################


wget https://www.snort.org/downloads/snort/snort-2.9.11.1.tar.gz

tar -xvzf snort-2.9.11.1.tar.gz

cd snort-2.9.11.1

./configure --enable-sourcefire && make && make install

ldconfig

ln -s /usr/local/bin/snort /usr/sbin/snort

snort -V


Configure Snort
###############
mkdir /etc/snort
mkdir /etc/snort/preproc_rules
mkdir /etc/snort/rules
mkdir /var/log/snort
mkdir /usr/local/lib/snort_dynamicrules
touch /etc/snort/rules/white_list.rules
touch /etc/snort/rules/black_list.rules
touch /etc/snort/rules/local.rules

chmod -R 5775 /etc/snort/
chmod -R 5775 /var/log/snort/
chmod -R 5775 /usr/local/lib/snort
chmod -R 5775 /usr/local/lib/snort_dynamicrules/
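
Before copying the configuration files, you can quickly confirm that the directory layout is in place (purely an optional sanity check):

ls -ld /etc/snort /etc/snort/rules /var/log/snort /usr/local/lib/snort_dynamicrules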

Copy the default configuration files and the compiled dynamic preprocessors from the extracted Snort source tree (run these from the directory where you extracted the tarball):

cd snort-2.9.11.1/etc
cp -avr *.conf *.map *.dtd *.config /etc/snort/

cd ..

cp -avr src/dynamic-preprocessors/build/usr/local/lib/snort_dynamicpreprocessor/* /usr/local/lib/snort_dynamicpreprocessor/

Comment out all of the individual rule includes in snort.conf (only local.rules will be re-enabled in the next step):

sed -i "s/include \$RULE\_PATH/#include \$RULE\_PATH/" /etc/snort/snort.conf
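
You can spot-check that the rule includes are now commented out; every matching line should begin with a "#":

grep -F 'include $RULE_PATH' /etc/snort/snort.conf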

Next, open the main configuration file and make sure the following variables and include are set:

vi /etc/snort/snort.conf

var RULE_PATH /etc/snort/rules
var SO_RULE_PATH /etc/snort/so_rules
var PREPROC_RULE_PATH /etc/snort/preproc_rules
var WHITE_LIST_PATH /etc/snort/rules
var BLACK_LIST_PATH /etc/snort/rules
include $RULE_PATH/local.rules
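
The test rules used later in this post match on $HOME_NET, which defaults to any in snort.conf. While editing the file you may also want to point it at your local subnet; the value below is only an example that matches the addresses used later:

ipvar HOME_NET 192.168.11.0/24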


Validate the configuration file with the following command:

snort -T -i eth0 -c /etc/snort/snort.conf


If the configuration is valid, the output ends with:

Snort successfully validated the configuration!
Snort exiting

Testing Snort:
##############

Add a few simple test rules to local.rules (locally defined rules should use SIDs of 1000000 or above):

vi /etc/snort/rules/local.rules

alert tcp any any -> $HOME_NET 21 (msg:"FTP connection attempt"; sid:1000001; rev:1;)
alert icmp any any -> $HOME_NET any (msg:"ICMP connection attempt"; sid:1000002; rev:1;)
alert tcp any any -> $HOME_NET 23 (msg:"TELNET connection attempt"; sid:1000003; rev:1;)

:wq!

Now start Snort in Network IDS mode from the terminal, telling it to print alerts to the console (-A console), suppress the startup banner (-q), use our configuration (-c) and listen on eth0 (-i):

snort -A console -q -c /etc/snort/snort.conf -i eth0
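
Snort can also read a saved capture file instead of sniffing live, which is handy for testing rules offline (capture.pcap below is just a placeholder file name):

snort -A console -q -c /etc/snort/snort.conf -r capture.pcap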

Now, if we ping the Snort server's IP (192.168.11.59) from another machine, alerts like the following appear in the terminal:

04/19-16:59:44.826558  [**] [1:1000002:1] ICMP connection attempt [**] [Priority: 0] {ICMP} 192.168.10.117 -> 192.168.11.59
04/19-16:59:44.826631  [**] [1:1000002:1] ICMP connection attempt [**] [Priority: 0] {ICMP} 192.168.11.59 -> 192.168.10.117
04/19-16:59:45.831347  [**] [1:1000002:1] ICMP connection attempt [**] [Priority: 0] {ICMP} 192.168.10.117 -> 192.168.11.59
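
The other two rules can be exercised the same way from another host (assuming nc and telnet are available there); each attempt should raise an alert with sid 1000001 or 1000003 respectively:

nc -vz 192.168.11.59 21
telnet 192.168.11.59 23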


Finally, Create the Snort Startup Script:
#########################################

vi /lib/systemd/system/snort.service

[Unit]
Description=Snort NIDS Daemon
After=syslog.target network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/snort -q -c /etc/snort/snort.conf -i eth0

[Install]
WantedBy=multi-user.target
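
Adjust the -i value in ExecStart to match the interface name on your machine; on newer distributions it is often something like enp3s0 rather than eth0 (which is why the status output further below shows enp3s0). You can list your interfaces with:

ip link show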


Save and close the file.
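
Because this is a newly created unit file, systemd needs to reload its unit definitions before the service can be enabled:

systemctl daemon-reload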

Then enable the service to run at boot time:

systemctl enable snort

Finally, start Snort:

systemctl start snort

You can check the status of Snort by running the following command:

systemctl status snort

You should see the following output:

root@machinexx:~# systemctl status snort
● snort.service - Snort NIDS Daemon
   Loaded: loaded (/lib/systemd/system/snort.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-04-19 17:06:27 IST; 6s ago
 Main PID: 707 (snort)
   CGroup: /system.slice/snort.service
           └─707 /usr/local/bin/snort -q -c /etc/snort/snort.conf -i enp3s0

Apr 19 17:06:27 machine01.ndzhome.com systemd[1]: Started Snort NIDS Daemon.
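
To follow the service's log messages over time, the standard systemd journal command works here as well:

journalctl -u snort -f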



Thanks,
That is all.





