XOR DDOS Mitigation and Analysis

Published: 2015-06-23
Last Updated: 2015-06-23 04:55:05 UTC
by Kevin Shortt (Version: 1)
6 comment(s)

XOR DDOS Trojan Trouble


I have struggled over recent months with a client's environment becoming infected and reinfected with an XOR DDoS trojan.  The disruption and reinfection rates were costly at times.  The client in question is a small business with limited resources.  Since the systems kept getting reinfected, a baited system was eventually put into place to determine the incoming vector.  It was never proven, but SSH brute forcing is believed to have been the incoming vector of attack.  Once the attackers were on the server, a rootkit trojan was used, and a very intelligent one at that.  I highly recommend that anyone who gets nabbed by this trojan, or one like it, reinstall the operating system as soon as possible and follow the prevention steps outlined below.

However, some circumstances require mitigation before resources are available for a reinstall/replacement and prevention measures.  The client was in a situation where taking the system offline was not an immediate option.  I have placed some really good links below. [1] [2] [3]   They review and analyze this malware and confirm that what we were experiencing was the same thing, with some minor differences.   However, they never really offer a short-term mitigation path to follow.  Only in a comment on a forum (possibly on one of the three articles below) did someone suggest changing the file/directory attributes to assist in mitigation; it was only a suggestion with no further follow-up.  Mitigation of this trojan was difficult, as it was intelligent enough to always restart when it was killed, with help from crontab entries that ran every three minutes.  It was also observed that the executable was sometimes hidden very well in the process table.

Basic MO

The victim server was a CentOS 6.5 system with a basic LAMP setup that offered SSH and vsftpd services.  iptables was in use, but NOT SELinux.  It is my untested claim that SELinux likely would have prevented this trojan from taking hold.   I am not an SELinux user/expert, so I was unable to take the time to add it to this environment.

The original malware was discovered in /lib/libgcc4.so .  This executable was perpetuated via cron in /etc/crontab every three minutes.
( */3 * * * * root /etc/cron.hourly/udev.sh )
If the crontab gets cleaned and an executable is still running, then the crontab will be repopulated on Friday night around midnight.  (Remember that the executable was sometimes hidden well and was overlooked.)
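
A quick check for both artifacts might look like the following; the paths are the ones observed in this incident, so adjust them if your variant uses different names.

# Look for the malicious cron hook and the dropped library
# (paths are the ones seen here; other variants may use different names)
grep -H 'udev.sh' /etc/crontab /etc/cron.hourly/* 2>/dev/null
ls -l /lib/libgcc4.so 2>/dev/null && echo "WARNING: /lib/libgcc4.so is present"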

The malware creates startup scripts with random-string names and places them in /etc/init.d/ .  You need only execute ls -lrt /etc/init.d/ to discover some evidence.  Along with the use of the top utility, you can determine how many are running.  If the startup scripts are deleted, then more executables and startup scripts will be created and begin to run as well.
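
A rough detection sketch, assuming the random names are ten lowercase letters as they were here:

# Recently changed init scripts stand out at the bottom of this listing
ls -lrt /etc/init.d/

# Heuristic: flag running processes whose names are exactly ten lowercase
# letters, the pattern observed with this variant (expect some false hits)
ps -eo pid,comm | grep -E ' [a-z]{10}$'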

The malware itself was used as a DDoS agent.  It took commands from a C&C server.  The IP addresses it communicated with were available from the strings output of the executable.  When the malware agent was called into action, the entire server and the local pipe were saturated and consequently cut off from service.
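
If you have a copy of the binary, candidate C&C addresses can be pulled out with something like the following (the two addresses listed under Mitigation were found this way):

# Dump printable strings and keep anything that looks like a dotted-quad IP
strings /lib/libgcc4.so | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'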

Mitigation

The following steps were taken for mitigation.   The only thing that prevented the recreation of the malware was the use of the chattr command.   Adding the immutable bit to the /etc/init.d and /lib directories was helpful in preventing the malware from repopulating.  I put together the script below and added the IP addresses listed after it to iptables to drop all communication.    The loop cleans up four running processes; I used ls and top to determine the loop arguments and the PIDs used in the kill command.   I threw the following into a script called runit.sh and executed it.

runit.sh:
mkdir -p /tmp/ddos   # quarantine directory for the malicious files

# Move the randomly named init scripts out of the way first, before
# the immutable bit makes /etc/init.d read-only
for f in zyjuzaaame lcmowpgenr belmyowxlc aqewcdyppt
do
   mv /etc/init.d/$f /tmp/ddos/
done

# Remove the cron hook and PID file, and quarantine the dropper itself
rm -f /etc/cron.hourly/udev.sh
rm -f /var/run/udev.pid
mv /lib/libgcc4.so /tmp/ddos/libgcc4.so

# Set the immutable bit so the malware cannot recreate its files
chattr -R +i /lib
chattr -R +i /etc/init.d

# Kill the running agents (PIDs identified beforehand with ls and top)
kill -9 19897 19890 19888 19891


IP Addresses to drop all traffic:   103.25.9.228  103.25.9.229
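
A minimal sketch of the corresponding iptables rules, assuming it is acceptable to simply insert them at the top of your existing chains:

# Drop all traffic to and from the C&C addresses
for ip in 103.25.9.228 103.25.9.229
do
   iptables -I INPUT  -s $ip -j DROP
   iptables -I OUTPUT -d $ip -j DROP
done

# Persist the rules across reboots (CentOS 6)
service iptables save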

Prevention

I now keep the immutable bit set on /lib on a clean system.  I turn it off before patching and software installs, in case the /lib directory is needed for updating.
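
In practice the routine looks something like this (the yum command is just a placeholder for whatever update you are running):

# Day to day: keep the directory immutable and verify the flag is set
chattr +i /lib
lsattr -d /lib

# Before patching: lift the flag, update, then set it again
chattr -i /lib
yum update
chattr +i /lib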

I also recommend installing fail2ban and configuring it to watch many of your services.  I currently have it watching the Apache logs, SSH, vsftpd, webmail, etc.   It really seems to be hitting the mark for prevention.   There is a whitelist feature to ignore traffic from a given IP or IP range; this helps keep legitimate customers from getting locked out and becoming a nag.
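
A minimal starting point is sketched below; the package names, jail names and log paths differ between distributions and fail2ban versions, so treat them as placeholders to adapt.

# fail2ban for CentOS comes from EPEL
yum install -y epel-release fail2ban

# Minimal jail.local: a whitelist plus the SSH jail. The jail name and
# defaults vary by fail2ban version (older releases call it ssh-iptables).
cat > /etc/fail2ban/jail.local <<'EOF'
[DEFAULT]
# whitelist: replace with your own trusted ranges
ignoreip = 127.0.0.1/8 192.0.2.0/24
bantime  = 3600
maxretry = 5

[sshd]
enabled = true
EOF

service fail2ban restart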

If you have experienced anything like the above, then please feel free to share.  This analysis is only scratching the surface.  The links below do a much deeper dive on this piece of malware.  

 
[1] https://www.fireeye.com/blog/threat-research/2015/02/anatomy_of_a_brutef.html
[2] https://blog.avast.com/2015/01/06/linux-ddos-trojan-hiding-itself-with-an-embedded-rootkit/#more-33072
[3] http://blog.malwaremustdie.org/2014/09/mmd-0028-2014-fuzzy-reversing-new-china.html


-Kevin
--
ISC Handler on Duty


Comments

Usually it's easier to just send the process a SIGSTOP instead of a SIGKILL (it does not require the chattr approach). Then you can clean up and send a kill to the process afterwards.
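Roughly, that approach would be (the PIDs being whatever you find in the process table):

# Freeze the processes first so they cannot respawn each other,
# then clean up on disk, then kill them
kill -STOP 19897 19890 19888 19891
# ... remove the cron entry, init scripts and binaries here ...
kill -KILL 19897 19890 19888 19891
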
This DDoS trojan has been around for quite some time; IIRC there are also Stack Exchange discussions about it.
I also failed to narrow down the initial attack vector. We suspected some outdated PHP application plus some local root exploit at the time.
This has been around quite some time. My first encounter with it was Aug 2014.

I did encounter that a SIGSTOP would only be effective if you never killed the process (left in perpetual pause). Once the process was killed, however, new spawns would occur (for my variants).
The comments in the third reference have the chattr -i fix. It seems there is more than one way to skin this cat.
Thanks for chiming in, jbmoore. For clarification purposes, my prevention method after the rebuild/reinstall of the server only included the directory. The recursive flag was used during mitigation and triage.

chattr +i /lib

This has proven effective and cleaner from a management perspective.
"Usually it's easier to just sent the process a SIGSTOP instead of a SIGKILL (does not require the chattr approach). Then you can clean up and send a kill to the process afterwards."

Doesn't work as well as one might think. Many of these make a copy before executing. The process goes like this.

1. Load binary_1 in memory.
2. Copy binary_1 to binary_2
3. Remove all references to binary_1 and replace with references to binary_2
4. Lather rinse repeat.

The best way I've found (I've been doing an insane amount of work on the actors behind this) is to reboot into single-user mode and remove the stuff from /etc/init.d and all related binaries.

Zach W.
Good write-up for those who want a summary of what's happening... Well done.

I recently investigated a system that was compromised through a dictionary attack on the root account via the sshd service.

The M.O. was basically:
* Dictionary attack that ended with a successful login on the root account
* This session was ended immediately after the successful login
* ~19.5 hours later a new IP logged into the root account in a single attempt (Q's: [1] Was the password shared between attackers or [2] Is the attacker in control of a bigger set of infrastructure than we'd like? Who knows...)
* After the second login, the standard MO was followed with regards to malware installation

I would agree with the /lib and /lib64 immutable bit, but for me this is a stage too late. The attacker already has root on your system. It's only a matter of time until they Google around, find what we are doing that breaks their install script, and alter it to remove the immutable bit before installation...

fail2ban is a good way to mitigate many different types of brute-force/DoS attacks, but I also don't believe it's a silver bullet for this situation either...

In this case, a best practice approach would have saved my client:
* Don't let root ssh in. Period. (a rough sshd_config sketch follows this list)
* Normal users must only be able to SSH in with a key/certificate and never with a password
* If you must have a root password (for console access, for example), use a generated one of significant length (16 characters at least) and save it in a trusted password safe (LastPass, for example). It needs to be huge, and it should never actually be used except in an emergency.
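
For the first two items, a rough sketch of the relevant sshd_config checks (assuming the stock config location and init scripts on CentOS):

# Check what is currently configured
grep -Ei '^(PermitRootLogin|PasswordAuthentication|PubkeyAuthentication)' /etc/ssh/sshd_config

# Desired settings:
#   PermitRootLogin no
#   PasswordAuthentication no
#   PubkeyAuthentication yes

# Reload sshd after editing (keep an existing session open in case of lockout)
service sshd reload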

In our case it was easy to destroy and rebuild all infected systems so this is the route we went.

Public Indicators:
* Dictionary attack source - 59.63.188.44
* Infection source - 175.126.82.235
