Without getting into the details of the attack itself, it turned out, at least initially, that the bots could be identified by a relatively small number of user-agent strings.
Googling the odder-looking user-agents turned up examples of similar attacks in the past using the same bot software. This indicated, as seems to be quite common, that the infected machines were spoofing their IPs.
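As an aside on how such strings show up: a tally of the cs(User-Agent) field in the IIS W3C logs makes the odd ones stand out quickly. Below is a minimal Python sketch of that kind of tally; the log path is a hypothetical example and the field layout depends on which W3C fields the site actually logs.

# Tally User-Agent values from an IIS W3C-format log to spot unusual strings.
# The path below is a placeholder; adjust it to the real log location.
from collections import Counter

LOG_FILE = r"C:\WINDOWS\system32\LogFiles\W3SVC1\example.log"

counts = Counter()
ua_index = None
with open(LOG_FILE, encoding="ascii", errors="replace") as log:
    for line in log:
        if line.startswith("#Fields:"):
            names = line.split()[1:]  # field names follow the directive
            ua_index = names.index("cs(User-Agent)") if "cs(User-Agent)" in names else None
        elif ua_index is not None and not line.startswith("#"):
            parts = line.split()
            if len(parts) > ua_index:
                counts[parts[ua_index]] += 1

for agent, hits in counts.most_common(20):
    print(f"{hits:8d}  {agent}")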
While our hosting provider was looking at the issue at the network level (made very difficult by the spoofed IPs), I tried an IIS host-level solution.
1. I installed URLScan 3.1: http://www.iis.net/expand/UrlScan
2. I then edited the site-wide URLScan.ini in C:\WINDOWS\system32\inetsrv\urlscan, changing:
RuleList=
to
RuleList=DenyUserAgent

and adding the new rule sections:

[DenyUserAgent]
DenyDataSection=AgentStrings
ScanHeaders=User-Agent
[AgentStrings]
Mozilla/5.0%20%28Win.....
where the [AgentStrings] section lists the 'escaped' user-agent strings that should be blocked (spaces and other special characters are percent-encoded, as in the example above).

3. Given the volume of entries in the logs, I also had to disable both the logging done by URLScan and IIS's own logging of the blocked requests.
3a. So I flipped the value of EnableLogging in URLScan.ini to '0'.
3b. And then executed the following two commands to disable IIS logging of only the specific /Rejected-By-UrlScan pseudo-URL:

CSCRIPT %SYSTEMDRIVE%\Inetpub\AdminScripts\adsutil.vbs CREATE W3SVC/1/ROOT/Rejected-By-UrlScan IIsWebFile
then
CSCRIPT %SYSTEMDRIVE%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/Rejected-By-UrlScan/DontLog TRUE
4. Restarted the 'World Wide Web Publishing Service'.
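With the service restarted, it is worth checking that the rule actually fires. The following Python sketch (the host name and user-agent values are placeholders, not the real attack strings) sends one request with a blocked user-agent and one with an ordinary one; by default URLScan serves rejected requests from the /Rejected-By-UrlScan URL mentioned above, which normally surfaces to the client as a 404.

# Quick sanity check that the new URLScan rule rejects the bad user-agent.
# HOST and the user-agent strings are placeholders.
import urllib.error
import urllib.request

HOST = "http://www.example.com/"
BLOCKED_UA = "Mozilla/5.0 (Win..."          # stand-in for an [AgentStrings] entry
NORMAL_UA = "Mozilla/5.0 (Windows NT 6.1)"  # an ordinary browser string

def probe(user_agent):
    req = urllib.request.Request(HOST, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # rejected requests should land here (404 by default)

print("blocked UA ->", probe(BLOCKED_UA))  # expect the rejection status
print("normal UA  ->", probe(NORMAL_UA))   # expect 200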
Note:
URLScan does a substring match on the entries listed in a custom rule's DenyDataSection. This makes it possible to block requests even when, for example, the attacker has a typo somewhere in the header, so you don't need to match a complete header string. But it does not allow more complicated matching, e.g. using regular expressions. If you are looking for the flexibility of Apache's various modules and directives, URLScan is not the answer.
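To make the substring behaviour concrete, here is a rough Python model of the check (an illustration of the matching logic as I understand it, not URLScan's actual code). The deny strings are shown unescaped for readability.

# Illustrative model of URLScan's substring match for a custom rule.
# DENY_STRINGS stands in for the [AgentStrings] entries; the real entries
# would be the percent-encoded forms.
DENY_STRINGS = [
    "Mozilla/5.0 (Win",   # placeholder attack signature
]

def is_rejected(user_agent_header):
    """True if any deny string appears anywhere in the User-Agent header."""
    header = user_agent_header.lower()
    return any(entry.lower() in header for entry in DENY_STRINGS)

# A typo or extra text elsewhere in the header does not help the attacker,
# because the match is on a substring rather than the complete header value:
print(is_rejected("typo-here Mozilla/5.0 (Win64; x64) attack-bot"))  # True
print(is_rejected("Mozilla/5.0 (Macintosh) ordinary browser"))       # False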
Conclusion:
This solution is only suitable for the most small-scale attacks. The attacker only has to update the user-agent strings to get past it. Alternatively, simply increasing the attack traffic would max out the CPUs of the web servers, since they have to do extra work parsing the request headers of every incoming request, or would kill them by exhausting the available bandwidth or the connection limit. Host-based mitigation techniques like this are, in general, going to be ineffective against botnet attacks.