10 September 2008

Protecting us from our own passwords

Very early in the history of interactive logon with passwords, the big brains noticed that if someone was looking over your shoulder, they might see what you were typing. So they decided that whenever any system, anywhere, asks for a password, it has to be replaced by blobs or asterisks.

We've all become so used to this that we don't realise how inappropriate it is for 99% of our daily interactions with computers. The vast majority of us will never encounter anyone trying to steal our password by looking over our shoulder, but I'm guessing that almost everybody reading this has been locked out of a system, site, or application to which they had legitimate access, by a problem caused by not being able to see what they were typing.

There are many reasons why people type the wrong password. They forget which site they're on, they forget that this system forced them to change their password last month, maybe Caps Lock is on, whatever. (If you're typing the right username but the wrong password into a site, you'd better hope that the site managers don't capture your wrong attempts and then try them on other sites which they might learn that you're signed up for...)

It's also possible that your keyboard layout may not be what the operating system thinks it is. Keyboards with different printed layouts are electrically identical - they report key positions, not letters - so the only way Windows (etc) can know what the top-left letter key means is the keyboard settings which you gave it. If someone replaces their QWERTY keyboard with an AZERTY one, without informing the system via some obscure part of the Control Panel, the top-left letter key might look like an A, but the system will see a Q. And the person typing will still see the same blob or asterisk. (In our environment, we use 15 different keyboard layouts, and people tend to move around and take their keyboards with them. And even if they know how to set up the layout for their current Windows session, they usually don't know that they should also change the default layout so that the new keyboard works correctly at logon time as well.)
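A sketch of the mismatch, in case it isn't obvious: only the A/Q and Z/W letter swaps between the two layouts are modelled here, and the mapping and function names are mine, purely for illustration.

```python
# Minimal sketch: what a QWERTY-configured OS receives when someone types
# on a physically AZERTY keyboard. The keyboard reports positions, and the
# OS turns positions into letters using the *configured* layout.
# Only the a/q and z/w swaps are modelled; punctuation and the M key are
# omitted for brevity.
AZERTY_LABEL_TO_QWERTY_LETTER = {"a": "q", "q": "a", "z": "w", "w": "z"}

def as_seen_by_windows(typed_on_azerty: str) -> str:
    """Return what the OS receives for each key the user thinks they pressed."""
    return "".join(AZERTY_LABEL_TO_QWERTY_LETTER.get(c, c) for c in typed_on_azerty)

print(as_seen_by_windows("azur"))  # → qwur (and the user sees only blobs)
```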

This "security feature" must cost millions of dollars in helpdesk calls every year. Everyone who has ever worked on a support phone line has had people call who are "convinced" that they are typing the right password. Sometimes you can get them to type the password in another box and then paste it across, but that's not always possible, and explaining it to a confused user is often a nightmare in itself ("Don't click OK when the password is in the username box!").

It doesn't even make you very much more secure. Someone who really wants to steal your password, and who is in the same room, can watch your keyboard while you type, perhaps keeping up some conversation to distract you, and after a couple of times they'll have a pretty clear idea of your password, especially since so many people choose insecure ones (hmm, did anyone think that maybe some people do that precisely because it's easier to type "rosepetal" than "h4%tfr3q" when you can't see what you've typed?). Now that we all have LCD screens, it's getting harder to sell us the fantasy that someone is parked outside our offices in a van examining the electromagnetic field from our monitor. And of course, the password-stealing spyware inside your PC gets a full view of every keystroke, unobscured by blobs. It's more than slightly ironic that the bad guys can see your password more clearly than you can.

So imagine my delight when I first saw this feature in an admirable free ZIP/RAR program called 7-Zip:

Yesss! Provided of course that there are no spies in the room, you can check the box when opening a password-protected RAR or ZIP file, so that you can see what you're typing in the password box!
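The mechanics are trivial - the stored password never changes, only how it's rendered. A minimal sketch of the idea (the function name is mine, not 7-Zip's):

```python
# Sketch of a "show password" checkbox: the value in the password box is
# untouched; the checkbox only switches how the field renders it.

def render_password(password: str, show: bool) -> str:
    """What the dialog displays: the real text, or one blob per character."""
    return password if show else "*" * len(password)

print(render_password("h4%tfr3q", show=False))  # → ********
print(render_password("h4%tfr3q", show=True))   # → h4%tfr3q
```

In most GUI toolkits this is a single property on the text field - Tkinter's `Entry(show='*')`, for instance - which makes the rarity of the checkbox all the more puzzling.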

Question: why isn't this feature available on every non-military password dialog box in the world?

04 September 2008

Not ready for (corporate) prime time

Since Google launched the Beta version of their Chrome browser - that was about 72 hours ago, but it feels like a lot longer - we've been getting people asking why we've blocked its download on our corporate network.

The short answer is "because it's not suitable". Sure, it looks great, has better security, Javascript runs fast, etc etc. But it seems like no thought has gone into how one might go about deploying it in a business setting.

First, the only way to get it is by interacting with Google's download site. You get a small installer executable, which then goes back to Google and downloads the rest. During this time, you sit and watch. Is it installing the same code as yesterday? How can you tell? Until we can see a single .MSI file, with the usual command-line parameters allowing for totally silent installation, we can't use this.

Secondly, have you taken a look at where the installer leaves all the files which make up the browser? Well, most of them are in the profile of the user who downloaded it. On an out-of-the-box version of XP this means that your Web browser is in C:\Documents and Settings\username\Local Settings\etc etc etc. This is a disaster in many corporate environments which use roaming profiles, because they typically have fairly strict retention policies about how long old profile copies are allowed to remain on PCs. Although it could have been worse (Chrome could have installed into some other directory at the top of the profile rather than "Local Settings", thus entering the roaming part of the profile and being copied to the server when you log off), this means that in practice you're going to have multiple copies of the browser per PC, but with each individual user losing access to it every time the local profile copy is cleaned.

I wish that this was the first time we'd seen this sort of problem, but it isn't. Although "consumer" products (such as the iPhone and iPod, or any Google application you can name) tend to be the worst offenders in terms of treating your entire PC as if they own it, some business software publishers are not far behind. I hope the person who decided to place Adobe Bridge's file cache in the roaming part of the user profile is reading this.

In mitigation, perhaps I should mention that it took Microsoft about 6 years from the release of Windows NT 4.0 to get their programmers to understand the consequences of roaming profiles. Most MS products now do the right thing, although some are still quick to impose their own view of the world on "User Shell Folders" registry entries if the network is a little slow, with potentially "hilarious" consequences for unsuspecting users who don't realise that all their new documents are being chucked into an unbacked-up directory on the local hard disk instead of onto their network drive.

31 March 2008

Pushing patches to 2000 PCs

Ever since Microsoft started the whole "Patch Tuesday" thing, we've been very cautious about rolling out patches (it seems like every 3 months or so, there's one patch which breaks something). Here's what we do:

- Read the descriptions to see which "critical" patches are really critical (almost none), which are probably worth installing (about half), and which are about as much use as a lottery ticket - for example, protecting you from malicious, individually targeted Excel sheets (the other half).

- Add these patches to the post-installation procedure which runs when we reinstall Windows. We do this about 300 times a year on our 2000 PCs, so after a month we have 25 or so PCs running the patches and there's a reasonable chance that we would know if a new patch was breaking something major.

- We don't push patches to all the PCs unless we think there's a credible threat of a major malware outbreak. Since MS-Blast and Sasser, there hasn't been anything very damaging, but about twice a year we decide to take the "sky is falling" threats a little more seriously. It's a good test of our patch push system, anyway. (Update 2009-01-20: we started pushing MS08-067 on the day it appeared, because there seemed to be a credible threat. In the three months since then, we have managed to patch about 98% of our PCs. The limiting factor is that we have laptops which people don't bring into work very often, and a few PCs which are very rarely switched on, partly due to our organisation's sometimes Kafkaesque staffing rules.)

All that's fine, but we still found ourselves removing three or four trojans a week. Most of these are (of course) undetected by anti-virus scanners, either because they are rootkits or because the scanner isn't up to date (sometimes we identify malware before the excellent VirusTotal has even a single scanner which can detect it).

I started to think about generic ways to fix this - perhaps involving patching browser executable files so that the offending Javascript functions don't work, like I did 8-9 years ago with a tool called ATLAS-T to patch Microsoft Word to eliminate macro viruses - but then I read an article about the number of malware links served up by search engines, which said that most of the techniques used by these trojans exploit IE vulnerabilities for which patches exist.

So we decided to try an experiment. We applied all of the current patches (including the various ones for IE6) to 20% of our PCs (all of which run XP). Over the next couple of weeks we noticed that none of the trojans which appeared were on patched PCs. We've now patched 95% of our PCs, and trojans have practically disappeared.

There was a downside, however. At least one Access application stopped working due to an authentication DLL being replaced "for security reasons", and since only about 5 PCs run this particular application, we didn't spot it until half the PCs were patched. The solution was to pretend that the patch was in place - our home-brew procedure looks for %SystemRoot%\KBnnnnnn.LOG and if it finds it, assumes the patch is installed, so we just create a fake log file on the PCs which run this application. But it will probably need rewriting before we push IE7 out to our desktops.
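The check is essentially a file-existence test. A sketch of the idea in Python (the real procedure is a command-line script; the function names here are illustrative):

```python
# Sketch of our home-brew patch check: a KBnnnnnn.LOG file in %SystemRoot%
# is taken as proof the patch is installed, so creating an empty one makes
# the push procedure skip that patch on that PC.
import os

SYSTEM_ROOT = os.environ.get("SystemRoot", r"C:\Windows")

def patch_seems_installed(kb_number: str, system_root: str = SYSTEM_ROOT) -> bool:
    """True if the marker log file for this KB number exists."""
    return os.path.exists(os.path.join(system_root, f"KB{kb_number}.LOG"))

def fake_out_patch(kb_number: str, system_root: str = SYSTEM_ROOT) -> None:
    """Create an empty marker file so the patch is never pushed to this PC."""
    with open(os.path.join(system_root, f"KB{kb_number}.LOG"), "w"):
        pass
```

The obvious weakness - which is why it will need rewriting - is that the marker says nothing about *what* is installed, only that something claimed to be.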

So, for once, the obvious advice ("keep your PC up to date with patches") actually has some use. Does it justify buying software to do it? Of course not (we do everything from distributed command-line scripts), but it may save you some malware cleanups, and the damage to the stability of your application platform by the patches may be sufficiently limited to actually make the whole thing worthwhile.

11 January 2008

True rootkits are here

I'm not a big fan of malware reporting in the media - generally it comes down to flacking press releases of the "sky is falling, buy our software" variety from some A/V company's marketing manager with a quota to meet - but I did take note of this article on the BBC's site.

Turns out that somebody has finally made what I would call a true rootkit: malware that loads from the master boot record (MBR) before the OS loader and, if the stealth is done right, will be completely invisible to anything downstream.

Although there's a pretty tight limit to what you can hide in the MBR (446 bytes of code, if I remember my DOS-based virus studies from the early 1990s), the malware can also probably take advantage of the rest of track 0, which on a modern multi-GB disk could be pretty big.
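For reference, here's how those 446 bytes fit into the 512-byte sector - a sketch that just slices a sector already read into memory (actually reading the MBR from \\.\PhysicalDrive0 needs administrator rights and is left out):

```python
# Layout of a classic 512-byte MBR: 446 bytes of boot code (the rootkit's
# home), four 16-byte partition table entries, and the 0x55AA signature.
MBR_SIZE = 512
BOOT_CODE_SIZE = 446
PARTITION_ENTRIES = 4
PARTITION_ENTRY_SIZE = 16
SIGNATURE = b"\x55\xaa"

def split_mbr(sector: bytes):
    """Split a raw MBR sector into (boot code, partition table, signature)."""
    if len(sector) != MBR_SIZE:
        raise ValueError("expected exactly one 512-byte sector")
    code = sector[:BOOT_CODE_SIZE]
    table = sector[BOOT_CODE_SIZE:BOOT_CODE_SIZE
                   + PARTITION_ENTRIES * PARTITION_ENTRY_SIZE]
    signature = sector[-2:]
    return code, table, signature

def looks_like_mbr(sector: bytes) -> bool:
    """A valid MBR ends in the 0x55AA boot signature."""
    return len(sector) == MBR_SIZE and sector[-2:] == SIGNATURE
```

446 + 4×16 + 2 = 512, which is where the tight limit comes from - and why the gap between the MBR and the first partition (the rest of track 0) is such attractive real estate for the remainder of the payload.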

So, if your anti-virus toolkit currently contains Rootkit Revealer, a spare bootable copy of Windows, and some registry hacks to get persistent malware files to die, you might also want to keep a DOS floppy handy. If FDISK /MBR doesn't work, you might wish you'd saved a copy with BOOTSEC (a utility I've used more or less every day since 1994) so you could restore it later.