Sensors, Vulnerabilities, and Data Protections

Many of us find ourselves with multiple gadgets – in our pockets, our homes, our cars, our offices – and these gadgets are increasingly built to talk to each other, often automatically and invisibly. Camera phones upload straight to the Web and connect through WiFi and Bluetooth to unseen computer networks; the printer next to your desk can suddenly start printing out documents sent from a branch office on the other side of the world, and our cars automatically pull down information from the sky on the latest traffic and weather conditions. Even common documents (licenses, passports, payment cards) that we carry around with us contain RFID chips. And all these sensors and transmitters are constantly busy, silently collecting and giving away our personal information to other devices, often without our knowledge. Every time such information is transmitted and received, there is a very real risk that the data may be intercepted by people other than those for whom it was originally intended, and tampered with or abused for criminal, terrorist, or other purposes.

It may be argued that scientists, especially those in academic circles, are more at risk than the general population. For all the theoretical discussion of computer security in academia, real security issues are often not taken as seriously there as they are in the business world, and this puts researchers, especially those whose work has potential commercial application, at risk with regard to their data. Scientists working on politically controversial or emotionally charged projects, such as climate change or stem cell research, have also famously found themselves targets for security attacks.

The Risk of Convenience
Numerous types of sensors were designed for our convenience, usually not with security in mind. For example, according to iSuppli (1), by the end of 2010 almost 80% (up from about 50% in 2009) of cell phones will have a built-in GPS system, which can transmit the user’s whereabouts to a remote party. For the most part, we see such technology as a welcome innovation – for example, finding the nearest coffee shop when we are in a strange city, or discovering through social media applications which of our friends are close at hand. We may have the option of allowing such information to be transmitted, or of blocking it (often with some difficulty once it has been enabled), when we first start to use the application, but there are other ways of tracking phones (and people) without our consent or knowledge.

The phone network is not the only system that provides information on our whereabouts; many digital cameras now include GPS receivers, permitting the automatic geotagging of photos. Most modern cars are equipped with satellite navigation systems, which both send and receive information.

Hidden Vulnerabilities
Computer systems, at home and at work, are obvious security targets – but the back doors may not be that obvious. Networking over the air (WiFi) or over power lines (2), and the use of Bluetooth gadgets, help to reduce clutter and introduce flexibility, but they also introduce risk. “Free” wireless access points are sometimes set up to capture WiFi traffic, and it is now possible to spoof a GSM cellular tower to capture all cellular telephone calls in a specific targeted area (3). Clearly, politicians and celebrities are not immune to hacking, as seen in the recent revelations that members of the British press were routinely listening in on the voice mails of private citizens and public figures, including the royals (4).

To prevent channels between devices from being compromised, it is possible to encrypt the traffic; but such encryption can slow down and impede users, and many “secure” products remain vulnerable because their protocols are poorly implemented. Often, the security and encryption on these devices is so troublesome to set up that many users (including corporate IT departments) don’t bother, or set things up incorrectly, falsely assuming they are protected.
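The gap between “encrypted” and “secure” is worth making concrete. In the minimal sketch below (Python, standard library only, using a deliberately toy keystream cipher invented purely for illustration), encryption alone lets an attacker flip ciphertext bits undetected; adding an encrypt-then-MAC integrity check, as a well-implemented protocol would, makes the tampering detectable.

```python
import hashlib
import hmac
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (illustration only, NOT a real cipher):
    # expand the key into a keystream with SHA-256 in counter mode,
    # then XOR it with the data. Applying it twice decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)
mac_key = secrets.token_bytes(32)
msg = b"transfer $100 to account 42"

# Encryption alone: an attacker can flip ciphertext bits undetected,
# silently corrupting the decrypted plaintext.
ct = xor_stream(key, msg)
tampered = bytes(b ^ (0x01 if i == 10 else 0) for i, b in enumerate(ct))
assert xor_stream(key, tampered) != msg  # corrupted, and nothing noticed it

# Encrypt-then-MAC: the receiver verifies an HMAC tag before decrypting,
# so the tampering is detected and the message rejected.
tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
assert hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest())
assert not hmac.compare_digest(
    tag, hmac.new(mac_key, tampered, hashlib.sha256).digest())
```

Real protocols use vetted primitives (e.g., AES-GCM) rather than anything hand-rolled; the point here is only that confidentiality without integrity is one of the classic implementation mistakes behind “secure” products that aren’t.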
Even if you’re not using a wireless network or a Bluetooth keyboard, the electromagnetic emissions from the equipment you use can be monitored remotely, in extreme cases actually allowing someone to read your screen through walls or from across the street.

You would think that most people by now would know something about the risks of viruses on their computers, yet many happily download and install unknown applications from dubious sources, oblivious to the fact that their new software could hijack their PC’s camera and microphone and surreptitiously transmit audio and video to parties unknown. In fact, the simple microphone found in every laptop can be used to determine, from the sound alone, which keys are being typed on its keyboard. Of course, misusing computer peripherals is sometimes an officially sanctioned activity, as shown in the case of the Pennsylvania school district that distributed student laptops with what the district termed “tracking-security” features, but could better be described as Big Brother “spyware,” taking photographs of unsuspecting students in their homes (5).

While the proliferation of USB devices over the past few years has been a boon for computer users, it has also increased opportunities for data hacking. Small USB keyloggers, similar in appearance to thumb drives or keyboard cable extenders, can remain undetected for months at a time, faithfully recording every password, confidential memo, and private thought before the device is retrieved (or the data automatically uploaded) and the contents analyzed – regardless of how tightly locked down your office’s network is. Even innocent-seeming devices such as USB flash drives and CD-ROMs distributed at trade fairs can be used to install back doors and “Trojan horses,” sending confidential data such as banking passwords back to base, just as a “free” game downloaded to a mobile phone can open that device up to unlimited abuse.

Nor, in case you’re wondering, is the written word any more secure. Many office printers, copiers and faxes now incorporate hard disks or other memory devices to capture, store and transmit the printed and scanned images (we don’t think of them as such, but modern copiers are actually sophisticated computers that can be easily compromised). These memory devices are designed to be accessible for maintenance purposes: they can be removed and their contents read at leisure. Most people don’t realize that the printouts and copies from many of these devices incorporate microscopic anti-counterfeiting information, which can also be used for tracking purposes. And when you leave the building, all the smart cards and RFID chips that you carry around – the corporate entry cards, mass transit cards, passports, credit and debit cards, etc. – can also let people know who and where you are and what you’re up to.

We can regard many of these security and privacy violations as essentially harmless, if irritating. Vending machines in Japanese train stations, for example, can automatically recommend drinks to customers based on their age, sex, and other factors (6). Text messages that pop up on your cell phone every time you come within 100 yards of a coffee shop you’ve visited in the past, inviting you to enjoy a discounted latte, are annoying. But the thought of criminals obtaining or abusing such information is a different matter. The “safety blanket” supposedly provided by these RFID chips is an illusion, since the chips, together with their content, can be cloned, with all the attendant problems of identity theft.
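Proximity triggers of this kind rest on a simple calculation: the phone’s reported coordinates are compared against a stored point of interest. A minimal sketch of such a geofence check (Python, standard library; the coordinates are hypothetical, and 100 yards is taken as roughly 91 meters):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(user, shop, radius_m=91.44):
    """True if the user's reported position is within ~100 yards of the shop."""
    return haversine_m(*user, *shop) <= radius_m

# Hypothetical coordinates, for illustration only.
shop = (35.6812, 139.7671)    # a station-front coffee shop
nearby = (35.6815, 139.7668)  # ~40 m away: triggers the offer
far = (35.6900, 139.7000)     # ~6 km away: does not
assert inside_geofence(nearby, shop)
assert not inside_geofence(far, shop)
```

The calculation itself is trivial; what makes it a privacy issue is the stream of position reports the phone has to emit for anyone to run it.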

What are the answers?
In their everyday working practices, computer scientists outside as well as inside the IT industry need to understand the essentials of security and how the data they collect will affect the overall system, in order to mitigate the risk of unintentional data leakage, which can cascade into further security failures. One way to do this is for researchers to find the flaws in the systems they use, but manufacturers seldom welcome these efforts. Attitudes toward the responsible disclosure of exploits by independent researchers need to change: such discoveries should be welcomed and acted upon, rather than being seen as challenges to professional competence (7).

Currently, there is a surprising lack of awareness of the risks posed by data breaches; the majority of technology companies are more concerned with business continuity than with security (8), and a large amount of R&D is devoted accordingly. As sensors become cheaper and more commonplace, a consistent approach needs to be taken to alerting consumers, at the user interface level, to the privacy risks of different sensors and applications, along with a unified hardware and basic software (OS) approach to security. The emphasis on continuity rather than security points to a larger issue: companies and organizations, including research institutions, need to be aware of the need for security and privacy protection measures within their domains, as well as of the risks that security breaches pose to their businesses, and should take the necessary steps to guard against those risks. At least three steps are vitally necessary to head off what I see as a serious crisis developing – serious for individuals who will suffer as a result of abuses of their privacy and personal information, and also for the many companies and organizations that will suffer from all-too-predictable legislation enacted to protect citizens from the “evils” of technology perverted by unscrupulous forces.

First, the makers of all devices capable of collecting and/or transmitting data have a duty to inform the public of any known vulnerabilities associated with their devices. Whether this should be a legal duty is another matter – it is probably impossible for a company to come up with an exhaustive list of every way in which its products could be abused. Industry, too, needs to get behind this effort and create standardized guidelines on the use of sensor data that contains personal information. There needs to be a cross-industry “best practices” standard governing the implementation of these sensors at the device level, explained to end users in a standardized format so that their use is consistent.

Second, companies engaged in such designing and manufacturing must act proactively, by incorporating security as an integral part of their products and of the design process, balancing the accelerated demand for new features against a possible regulatory backlash that may occur if security becomes a populist consumer issue. There are real-world examples of how security is already being taken seriously in areas that may seem surprising. Some copier manufacturers (9) offer and promote encryption on the hard drives built into their copiers and printers. Such encryption significantly reduces the value of a stolen or illegally accessed hard drive. Many laptop manufacturers now offer the option to disable USB ports (this is standard operating procedure in many corporate Windows desktop builds), and several cell phone manufacturers promote models without cameras. Unfortunately, these solutions fail to address the root cause of the issue; they are merely “patches” for a few of the holes in what is a veritable Swiss cheese of data insecurity.

Third, and perhaps most important, industry players must collaborate and implement stringent self-regulation to better define the collection and use of data from the different sensors in our lives. Moreover, global business must work closely with government to strengthen the penalties for any interception of information containing personal data not intended for the person or organization reading it.

In short, we stand at a crossroads in terms of dealing with data security, and both paths are, for different reasons, highly unattractive. Prompt, meaningful self-regulation to avoid a coming crisis seems just as impossibly difficult to some as suffering the painful, throw-out-the-babies-with-the-bathwater over-reaction of technically unsophisticated, politically motivated government regulators. Yet, to this writer at least, the former road is far preferable. I am aware that cross-industry cooperation, not to mention industry-government cooperation, is no easy matter, but the consequences of delay could be catastrophic. It is essential to avert this crisis so that consumer choice isn’t restricted, manufacturers aren’t shackled, and researchers aren’t thwarted in their development work by a new wave of draconian personal data protection laws.

References & Notes
1. J. Rebello, iSuppli, 16 July 2010.
2. R. Newman, S. Gavette, L. Yonge, R. Anderson, paper presented at the Symposium on Usable Privacy and Security (SOUPS), 12-14 July 2006.
3. T. Bradley, PC World, 1 August 2010.
4. D. Van Natta Jr., J. Becker, G. Bowley, New York Times, 1 September 2010.
5. C. Matyszczyk, CNET News, 20 February 2010.
6. D. Wakabayashi, J. Osawa, The Wall Street Journal, 3 September 2010.
7. Editorial, “Security Ethics,” Nature 463, 136 (2010).
8. J. Goodchild, ComputerWorld, 24 May 2010.
9. As verified by NIAP.
10. We thank H. Ashton for useful discussions and comments.

William H. Saito