This week, engineers at Google activated the so-called Android "kill switch," a technology that allows the company to remotely remove applications installed on users' phones. The applications in question, designed by a security expert for research purposes, were described as "practically useless." They were not used maliciously, nor did they access private data, according to Android Security Lead Rich Cannings in a company blog post. Instead, the apps simply misrepresented their purpose to encourage downloads.

"Most users uninstalled the applications shortly after downloading them," Cannings wrote, dismissing the impact of these questionable apps. But if the apps were effectively harmless, why zap them?

Zap that App!

Compared with smartphone rival Apple, Google's transparency in the matter is actually refreshing. Apple very seldom makes any official announcement regarding applications that are removed or rejected from its App Store, unless it's responding to public outcry, as was the case with Pulitzer Prize-winning satirical cartoonist Mark Fiore's app or the banned Web comic version of James Joyce's Ulysses, both of which Apple later admitted were mistakes. (Censorship is a slippery slope, is it not?)

But the Android kill switch situation still seems a little odd. If the apps weren't malicious, were generally uninstalled by the duped users, and had already been voluntarily removed from the Market by the researcher in question, why zap them off everyone's phones, too?

The kill switch is designed to remove dangerous applications from phones - those that steal or access private user data, contain malware or viruses, or access system resources without permission... is it not?

Well, actually, no. The kill switch, per the Android Developer Terms of Service, may be used against any app "that violates the Android Market Developer Distribution Agreement or other legal agreements, laws, regulations or policies." If that's the case, then "Google retains the right to remotely remove those applications from your Device at its sole discretion."

Interesting Timing on that Kill Switch, Google...Very Interesting

But what's really interesting about this news is the timing.

The Google blog post arrived only one day after news broke about a frightening, but perhaps inflated, report from security firm SMobile Systems that described how one-fifth of Android applications expose private user data. These apps, SMobile concluded, could therefore be used for malicious purposes.

It was quite a leap, though, to claim that because an app accessed private info it was dangerous. A contact organizer, for example, would have access to your phone's address book, but is that really a concern? No.

A Google spokesperson also responded to CNET's coverage of the news, disputing the report's claims and reminding the public that Android apps "must get users' permission to access sensitive information" - a point worth noting, to say the least. "Developers must also go through billing background checks to confirm their real identities, and we will disable any apps that are found to be malicious," the spokesperson said, seemingly referring to the kill switch technology.

And yet, in this case, Google removed "non-malicious" apps. Yes, there was the obvious misrepresentation by the researcher as to the apps' purposes, the general uselessness of the apps in question, and the need to enforce a Market where developers play by the rules. But it's still worth pointing out that Google activated its kill switch for non-malicious applications the company itself described as "useless." And it did so just one day after a security firm, albeit a questionable one with apparent ties to AT&T, blasted the company for the growing number of spyware apps on the market.

If anything, the remote app zapping looks like a response to those (reportedly bogus) claims, which is either a case of very coincidental and/or bad timing on Google's part, or... well... could it be that there was actually some truth behind all that hype?

Could it be that asking for permission is not the panacea Google claims it is on Android?

After all, over the years, putting the onus on the user to be mindful of their own security concerns has led to pop-up ads that resemble computer error messages, Facebook "recommendations" that instantly publicize your private data, user agreements and EULAs that install spyware, adware and toolbars on your computer, and a number of other undesirable situations for end users.
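To make that permission model concrete: in Android, an app declares up front, in its manifest file, which protected resources it wants, and the user is shown the full list at install time and asked to accept it all or not install at all. Below is a minimal, hypothetical manifest excerpt for the contact-organizer scenario described above (the package name and labels are illustrative, not from any real app):

```xml
<!-- Hypothetical AndroidManifest.xml excerpt for a contact-organizer app. -->
<!-- Each <uses-permission> entry is what drives the install-time prompt; -->
<!-- once the user accepts, the app can use these resources freely. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.contactorganizer">

    <!-- The legitimate, expected request: the app organizes the address book. -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />

    <!-- A less obvious request that a user might wave through anyway - and the
         combination of the two is exactly what lets data leave the phone. -->
    <uses-permission android:name="android.permission.INTERNET" />

    <application android:label="Contact Organizer" />
</manifest>
```

The catch is visible right in this sketch: each permission looks reasonable on its own, and the install-time dialog leaves it entirely to the user to notice that contacts access plus network access together could mean the address book gets uploaded somewhere.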

It seems that, when it comes to Android security, there's a fine line between safe applications politely accessing your private data with permission and those that could do a bit more, perhaps, than you had originally intended. The question now is: how much of this will be the user's responsibility to manage, and how much can the user rely on Google - and its Android kill switch technology - to manage for them?