Friday, August 27, 2010

Net.Tcp WCF requires clientaccesspolicy.xml be available from IP, not hostname

Testing a new Net.Tcp WCF service, I got the following error:

"Could not connect to net.tcp://X:4502/XService. The connection attempt lasted for a time span of 00:00:00.4270427. TCP error code 10013: An attempt was made to access a socket in a way forbidden by its access permissions.. This could be due to attempting to access a service in a cross-domain way while the service is not configured for cross-domain access. You may need to contact the owner of the service to expose a sockets cross-domain policy over HTTP and host the service in the allowed sockets port range 4502-4534."

Although this is a very common error, in our case, it was not fixed by the obvious solutions:

1. clientaccesspolicy.xml was already available. I verified that through the browser.
2. clientaccesspolicy.xml already had the correct contents allowing port 4502 (something like the sketch below).
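
For reference, a policy file allowing TCP connections in the sockets port range looks roughly like this (the wildcard domain is just for illustration; restrict it as appropriate):

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from>
        <domain uri="*" />
      </allow-from>
      <grant-to>
        <socket-resource port="4502-4534" protocol="tcp" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>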

It turned out that while clientaccesspolicy.xml was available via our server hostname, it was NOT available via the server IP address.

Even if the Silverlight client is configured to access a net.tcp WCF service using a valid DNS name, it still requests clientaccesspolicy.xml from the IP address that the DNS name resolves to.

This happens EVEN IF it has already loaded the very same clientaccesspolicy.xml through the DNS name.

In our case at least, IIS7 by default does not allow access by IP for a site that has already been configured for a DNS name.

So I added a new site in IIS Manager, bound to the IP address, pointing to the same folder already containing our clientaccesspolicy.xml file.

But this new site must be bound to the internal IP (e.g. 192.168.1.xxx), NOT the external IP (e.g. 88.77.66.55).
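
For example, a site like that can be created from the command line with appcmd; this is just a sketch, with the site name, IP, and path as placeholders:

%windir%\system32\inetsrv\appcmd add site /name:"PolicyByIP" /bindings:"http/192.168.1.10:80:" /physicalPath:"C:\inetpub\wwwroot"

The trailing colon in the binding leaves the host header blank, so the site answers on the bare IP address.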


After discovering the problem myself, I found this post describing it:

http://blogs.msdn.com/b/silverlightws/archive/2010/04/09/policy-file-for-nettcp.aspx

Wednesday, August 25, 2010

Net.Tcp WCF service requires 'Integrated' managed pipeline mode, not 'classic' mode

We built and successfully tested a Net.Tcp WCF service. But when deploying it to an off-site server for further testing, we discovered the IIS worker process was crashing.

In the event log, we got plenty of these:


Source: ASP.NET 4.0.30319.0
EventID: 1088
Description: 0x8000ffff Catastrophic failure


and some of these:


Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b
Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
Exception code: 0xc0000005
Fault offset: 0x00000000
Faulting process id: 0xfe8
Faulting application start time: 0x01cb429f60c2c986
Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe
Faulting module path: unknown
Report Id: f85bf6d3-ae92-11df-a403-00219b00a9ca


We had to issue an 'iisreset' from the command line to bring it back.

It turned out the off-site servers were running our application in a 'Classic' mode app-pool, whereas during development we had used 'Integrated' managed pipeline mode.

Changing the off-site servers to 'Integrated' seemed to fix the problem.
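
The pipeline mode can be switched from the command line as well as in IIS Manager; a sketch, with the app-pool name as a placeholder:

%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /managedPipelineMode:Integrated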

Thursday, August 12, 2010

ExecuteReader: Connection property not initialized

This is an inaccurate error message!!

Despite what the error message says, the following error can be caused by not setting the 'CommandText' property of the SqlCommand object:

System.InvalidOperationException: ExecuteReader: Connection property has not been initialized.

I noticed this while working with transactions.

In my case, it was NOT caused by the 'Connection' property at all.
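
A sketch of the pattern I was using follows (the connection string and names are placeholders): note that Connection IS assigned, and only CommandText is missing, yet the exception reported was about the Connection property.

using System.Data.SqlClient;

class Repro
{
    static void Main()
    {
        // Placeholder connection string, for illustration only.
        using (var conn = new SqlConnection("Server=.;Database=MyData;Integrated Security=true"))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            {
                var cmd = new SqlCommand();
                cmd.Connection = conn;   // the Connection property IS set
                cmd.Transaction = tran;  // required while a transaction is open
                // cmd.CommandText is never assigned here, and this throws
                // (per the error observed in this post):
                // System.InvalidOperationException: ExecuteReader: Connection
                // property has not been initialized.
                using (var reader = cmd.ExecuteReader()) { }
            }
        }
    }
}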

Friday, June 18, 2010

Cygwin SSHD, Remote Tunnel, GatewayPorts

When opening a remote tunnel over SSH, I need to allow clients on machines other than the SSHD host to access the port that has been opened.

To do this, the client needs to request it; in PuTTY, for example, set the 'Remote ports do the same' option.

AND

the SSHD server needs to have 'GatewayPorts' enabled in /etc/sshd_config
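
For the plain OpenSSH client, the equivalent looks something like this (the host and port numbers are just examples):

# in /etc/sshd_config on the SSHD host:
GatewayPorts yes

# then restart sshd and open the remote tunnel from the client:
ssh -R 8080:localhost:80 user@sshd-host

Without GatewayPorts, sshd binds the remote end of the tunnel to the loopback interface only, which is exactly why other machines can't reach it.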

Thursday, June 17, 2010

Subversion: Trimming old commits, keeping revision numbers intact.

Our repository is getting quite large. Some time ago, we reorganised our repository, and none of the history before that reorganisation is really that important.

So, I decided to do a 'dump' of the repository from the earliest required revision to HEAD, and then load that into a fresh repository.

But I got the following warnings:
"Referencing data in revision [X], which is older than the oldest"
and
"Loading this dump into an empty repository will fail."

So I retried this a few times, moving the 'start' revision back until I no longer got these warnings. Luckily I didn't have to go back much further.

I then did a 'load' command of this new dump into a new repository.

The next thing I noticed was that this new repository, while functionally correct, had different revision numbers. But the old revision numbers are referenced in our revision tracking system, and in comments throughout the version history.

How do I load back my dump file, with a starting revision matching the original repository?

I found I could put together a tiny script that inserts a series of trivial commits into the repository, filling in the required revision numbers. Then, when loading in my dump file, its commits match the revision numbers of the original repository.

But since the new repository is not empty when loading the dump file, I need to set the UUID of the new repository to match the old repository, using --force-uuid.

It's not the most elegant solution, but it seems to work.

The following are the commands I used:

1.
svnadmin dump \SVN\ROOT -r 1282:HEAD > ROOT_1282.dump

2.
svnadmin create \SVN\ROOT2

3.
svn co file:///c:/SVN/ROOT2 ROOT2_WC

4.
echo 0 > ROOT2_WC\zzz

5.
svn add ROOT2_WC\zzz

6.
REM NOTE: the following loop creates commits r1 through r1280; r1281 is
REM created in step 8. The dump file's first revision (originally r1282)
REM then matches the next revision number in this new repository.
for /L %n in (1,1,1280) do (
echo %n > ROOT2_WC\zzz
svn commit ROOT2_WC\zzz -m "empty commit replaces original")


7.
svn delete ROOT2_WC\zzz

8.
svn commit ROOT2_WC -m "removing dummy file used for empty commits"

9.
svnadmin load --force-uuid ROOT2 < ROOT_1282.dump
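
As an optional sanity check, svnlook can confirm that the new repository ends at the same HEAD revision and carries the same UUID as the original:

svnlook youngest \SVN\ROOT2
svnlook uuid \SVN\ROOT2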

Friday, May 14, 2010

SQL Server Transaction Log too big...

USE MyData
GO
-- shrink the log file (target size 1 MB)
DBCC SHRINKFILE(MyData_log, 1)
-- discard the log contents without backing them up
-- (note: WITH TRUNCATE_ONLY was discontinued in SQL Server 2008;
-- on later versions, switch the database to SIMPLE recovery instead)
BACKUP LOG MyData WITH TRUNCATE_ONLY
-- shrink again now that the log is empty
DBCC SHRINKFILE(MyData_log, 1)
GO

Wednesday, April 21, 2010

What process opened a port on windows?

To see the process id of all open ports:
netstat -a -n -o

To see the process name for a particular process id:
tasklist /svc /FI "PID eq 2856"

I should really write a script to combine these...
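
Something along these lines combines them in one go from cmd (TCP listeners only; it prints a tasklist block per PID and will repeat a PID that owns several listening ports; use %%p instead of %p inside a batch file):

for /f "tokens=5" %p in ('netstat -a -n -o ^| findstr LISTENING') do @tasklist /svc /FI "PID eq %p"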

Wednesday, March 10, 2010

Windows 7 Firewall, Limit SSH Access to Ireland only

I've Cygwin SSHD running on one of my Windows 7 machines. I've noticed connection attempts from places all over the world. I'd rather restrict access a bit, in case they are attacking an exploitable flaw, or one of my passwords is too weak (I must set up key-only login auth).

Now that Windows 7 has a much improved firewall, I can add rules that allow inbound access to port 22 from only a limited set of remote IP addresses.

To limit it to Ireland only, for example, I looked up the full range of Irish IP addresses using this site: http://www.countryipblocks.net/

Then I ran the following from the command line (run as Administrator):

netsh advfirewall firewall add rule name="SSHD IN Ireland Only" dir=in localport=22 protocol=TCP action=allow remoteip=62.9.0.0/16,62.17.0.0/16,62.40.32.0/19,62.77.160.0/19,62.231.32.0/19,....
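
To double-check the rule afterwards, or to remove it again, the matching commands are:

netsh advfirewall firewall show rule name="SSHD IN Ireland Only"
netsh advfirewall firewall delete rule name="SSHD IN Ireland Only"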

The full list of subnets is quite long, and I don't know the maximum number of entries allowed. Nor do I know the performance impact, if any, of a large number of entries on networking in general. If they've built the firewall properly, the impact on unrelated connections should be negligible.

Tuesday, February 23, 2010

DDOS, URLScan, Disable IIS Logging

We have had to deal with a distributed denial of service attack lately.

Without getting into the details of the attack itself, it turned out, at least initially, that the bots could be identified by a relatively small number of user-agent strings.

Googling the odder-looking user-agents turned up examples of similar attacks in the past using the same bot software. This indicated, as seems to be quite normal, that the infected machines were spoofing their IPs.

While our hosting provider was looking at the issue at a network level, made very difficult by the spoofed IPs, I tried an IIS host-level solution.

1.
I installed URLScan 3.1
http://www.iis.net/expand/UrlScan

2.
I then edited the site-wide URLScan.ini in C:\WINDOWS\system32\inetsrv\urlscan

Changing:
RuleList=
to
RuleList=DenyUserAgent

[DenyUserAgent]
DenyDataSection=AgentStrings
ScanHeaders=User-Agent

[AgentStrings]
Mozilla/5.0%20%28Win.....


where the AgentStrings section lists the 'escaped' user-agent strings that should be blocked.


3.
Given the volume of entries in the logs, I also had to disable both the logging done by URLScan and IIS's own logging of blocked requests.


3a.
So I flipped the value of EnableLogging in URLScan.ini to '0'

3b.
And then executed the following two commands to disable IIS logging for just the /Rejected-By-UrlScan pseudo-URL:

CSCRIPT %SYSTEMDRIVE%\Inetpub\AdminScripts\adsutil.vbs CREATE W3SVC/1/ROOT/Rejected-By-UrlScan IIsWebFile
then
CSCRIPT %SYSTEMDRIVE%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/Rejected-By-UrlScan/DontLog TRUE

4.
Restarted the 'World Wide Web Publishing Service'.

Note:
URLScan does a substring match on the entries listed in a custom rule's DenyDataSection. This makes it possible to block attacks where, for example, the attacker has a typo in one of the request headers; you don't need to match a complete header string.

But it does not allow more complicated matching, e.g. using regular expressions. If you are looking for the flexibility of Apache's various modules and directives, URLScan is not the answer.


Conclusion:
This solution is only suitable for the smallest-scale attacks. The attacker only has to change the user-agent strings. Alternatively, simply increasing the attack traffic would max out the web servers' CPUs, since they have to do extra work to parse the request headers, or would kill the servers by exhausting the available bandwidth or the connection limit. Host-based mitigation techniques like this are, in general, going to be ineffective against botnet attacks.

Friday, January 29, 2010

Encrypted Web.config on IIS 6.0, Win2k3

1.
created identity.aspx containing only the following:

<%@ Page Language="C#" %>
<%
Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name);
%>

In a browser, saw that the identity was
NT AUTHORITY\NETWORK SERVICE


2.
cd C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727


3.
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pa "NetFrameworkConfigurationKey" "NT AUTHORITY\NETWORK SERVICE"
Adding ACL for access to the RSA Key container...
The RSA key container was not found.
Failed!


4.
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pc "NetFrameworkConfigurationKey" -exp
Creating RSA Key container...
Succeeded!


5.
Tried step 3. again...

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pa "NetFrameworkConfigurationKey" "NT AUTHORITY\NETWORK SERVICE"
Adding ACL for access to the RSA Key container...
Succeeded!


6.
I have two websites, on different ports, both at the root URL /, so to distinguish them when encrypting the connection strings I used the site ID (the Identifier field in the IIS Manager Web Sites list):

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "connectionStrings" -app "/" -site 1
Encrypting configuration section...
Succeeded!

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "connectionStrings" -app "/" -site 219934440
Encrypting configuration section...
Succeeded!


7.
I verified in a text editor that the Web.config sections had been changed, and also that the running application was still able to read the connection strings.
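
This works because ASP.NET decrypts protected sections transparently when the configuration is read, so no application code changes are needed. A minimal illustration (the connection string name 'MyDb' is a placeholder):

using System.Configuration;
// ...
string cs = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;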


8.
I did the same for the machineKey:

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pe "system.web/machineKey" -app "/" -site 219934440
Encrypting configuration section...
Succeeded!



9.
I tested decrypting the sections back to the originals:

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pd "connectionStrings" -app "/" -site 1
Decrypting configuration section...
Succeeded!

C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727>aspnet_regiis -pd "connectionStrings" -app "/" -site 219934440
Decrypting configuration section...
Succeeded!




All taken from:
http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
http://social.msdn.microsoft.com/Forums/en/clr/thread/087df87f-8fb5-4e54-a57b-0bbdbc544c4f
http://forums.asp.net/p/960412/1423554.aspx#1423554

Friday, January 22, 2010

Semi-Transparent Windows

Sometimes I work on the command line, or am editing some file in a text editor, but I want to look at one window while I type in another. Even with my two screens, windows must sometimes overlap, and when this is necessary a semi-transparent foreground window can be handy, so the window behind can still be read.

Yes, Aero in Windows lets your windows' borders be semi-transparent. But it's only a thin border, and you still can't read what's behind, as the background is blurred. I really don't understand the usability advantage of this feature. Seems a bit useless to me.

There's also the 'Glasser' extension for Firefox that makes Firefox's toolbars transparent in a similar way. But that only adds a bit of consistency, without much of a benefit.

Slightly more useful is 'Glass CMD for Vista', which makes cmd.exe semi-transparent, but again, the background is blurred so you can't read what is behind. There is also 'Glass Notepad'.

What about disabling the blur? It is possible with a registry hack, or by replacing DLLs on Windows 7. Neither of these seems like a very clean solution.

Then I found Glass2k. And it even works on Windows 7 x64! I just press Ctrl-Shift-[1 to 9] to vary the transparency. And there is no blur. Perfect.