Many of us find ourselves with multiple gadgets – in our pockets, our homes, our cars, our offices – and these gadgets are increasingly built to talk to each other. Camera phones upload straight to the Web and connect through WiFi and Bluetooth to computer networks; copiers can be used as printers by branch offices on the other side of the world; and our cars pull down information from the sky on the latest traffic and weather conditions. Even the documents (licenses, passports, payment cards) that we carry around with us contain RFID chips. And all these sensors and transmitters are constantly busy, quietly collecting our personal information and giving it away to other devices. Every time such information is transmitted and received, there is a risk – the risk that the data may be intercepted, or worse, altered, by people other than those for whom it was originally intended, and used for nefarious purposes.
The major problem here is that such devices were designed for our convenience, usually not with security in mind. For example, according to iSuppli, by the end of 2010 almost 80% of cell phones (up from about 50% in 2009) will have a built-in GPS receiver, which can transmit the user’s whereabouts elsewhere. For the most part, we see such technology as a welcome innovation – finding the nearest coffee shop when we are in a strange city, for example, or discovering which of our friends is close at hand through social media applications such as Foursquare or Google Latitude. We may have the option of allowing such information to be transmitted, or of blocking it, when we first start to use the application, but there are other ways of tracking phones (and people) without our consent or knowledge.
The phone network is not the only system that reports our whereabouts: many digital cameras now include GPS receivers, permitting the geotagging of photos, and most modern cars are equipped with satellite navigation systems that exchange information in both directions. Once again, we welcome this as a convenience – being informed of a traffic snarl-up in the next city and instructed how to avoid it. But many rental car companies already track this GPS data, examining renters’ usage patterns and searching for violations of the rental agreement.
But what happens if the real “bad guys” get hold of this information? What if they surreptitiously obtain the GPS data from your phone or your car and find out that you’re 200 miles from home, heading away? It might be a good time to burglarize your home. Or what about a suspicious spouse spying on the extra-marital activities of a partner? Even the “fun” geotagging applications, and the similar features of Twitter, are great ways to tell the world (including any potential wrongdoers) where you are – or, possibly more importantly, where you are not. (For example, a salesman is supposedly making a sales call, but his employer discovers his car has been parked outside Disneyland for the past four hours.) It is not only the car GPS systems that are at risk, but also the on-board electronics that live under the hood of most modern cars and can be used to “phone home”, such as General Motors’ OnStar system. As the result of a break-in to such a system, your car could be remotely unlocked by a thief and driven away, or you (the rightful owner of the vehicle) could be reported to the police as the thief, and spend an unpleasant time establishing your bona fides to the law. In just three years since the introduction of the original iPhone, GPS chipset prices have dropped to one-third of the original cost, while the chips have become more accurate and faster, so incorporating this technology into just about anything becomes trivial. In fact, it is currently estimated that half of video game players will have an embedded GPS by 2014.
Computer systems, at home and at work, are obvious security targets – but the back doors may not be so obvious. Networking over the air (WiFi) or over power lines, and the use of Bluetooth gadgets, reduces clutter and introduces flexibility – but it also introduces risk. The channels between devices can be tapped and the traffic read. It is of course possible to encrypt the traffic, but encryption can slow things down and inconvenience users, and the encryption itself may be vulnerable if it is poorly implemented. Often, though, the security and encryption on these devices are so troublesome to set up that many users (including corporate IT departments) don’t bother, or set things up incorrectly, falsely assuming they are protected.
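The principle is easy to illustrate. The toy sketch below (standard-library Python only; the hash-counter stream cipher and message layout are illustrative inventions, not production cryptography – real device links should use vetted protocols such as WPA2/WPA3 or TLS) shows the two properties encrypted traffic is supposed to provide: an eavesdropper on the channel cannot read the message, and a tampered message is detected rather than silently accepted.

```python
# Toy demonstration of confidentiality + integrity on a tapped channel.
# NOT production crypto: the hash-based keystream is for illustration only.
import hashlib
import hmac

NONCE_LEN = 16  # per-message value; must never repeat for the same key
TAG_LEN = 32    # length of the SHA-256 HMAC tag

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || nonce || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR the message with the keystream, then append an HMAC tag so that
    # any modification in transit is detectable by the receiver.
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:NONCE_LEN], blob[NONCE_LEN:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message tampered with or wrong key")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# A device reports its owner's location; the radio link is assumed tapped.
key = b"shared-secret-between-two-devices"
blob = encrypt(key, b"0123456789abcdef", b"badge #4471 entering lobby")
assert b"4471" not in blob                           # eavesdropper sees noise
assert decrypt(key, blob) == b"badge #4471 entering lobby"
```

The point of the sketch is how little code is needed on the device side – and, equally, how easy it is to get wrong: reuse a nonce, skip the tag check, or ship a default key, and the protection quietly evaporates, which is exactly the misconfiguration trap described above.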
It may be argued that scientists are more at risk than the general population, especially those in academic circles. The trust-based early Internet that Clifford Stoll described in his seminal book, “The Cuckoo’s Egg”, still exists to a large degree, more than 20 years after the book was written. Computer security inside the academic environment is often not taken as seriously as it is in the business world, and this puts researchers – especially those whose work has potential commercial application – at risk with regard to their data. Scientists working on politically controversial projects, such as climate change research, have also famously found themselves targets of security attacks.
Even if you’re not using a wireless network or a Bluetooth keyboard, the electromagnetic emissions from the equipment you use can be monitored by the “bad guys”, in extreme cases actually allowing them to read your screen through walls.
You would think that most people by now would know something about the risks of computer viruses, yet many happily download and install applications, oblivious to the fact that the software could hijack their cameras and microphones, turning them into eavesdropping “bugs” that transmit audio and video to people who shouldn’t be listening in. Misusing computer peripherals is sometimes even an officially sanctioned activity, as shown in the case of the Pennsylvania school district that distributed student laptops with what the district termed “tracking-security” features, but which could better be described as camera “spyware” – taking photographs of unsuspecting students in their homes.
The proliferation of USB devices over the past few years has helped computer users by allowing the easy connection and setup of many different gadgets. At the same time, it has increased the opportunities for mischief. Small USB key loggers, similar in appearance to thumb drives or keyboard cable extenders, can remain undetected for months at a time, faithfully recording every password, confidential memo, and private thought before the device is retrieved (or the data automatically uploaded) and the contents analysed – regardless of how tightly locked down your office’s network is. Even innocent-seeming devices such as USB flash drives and CD-ROMs distributed at trade fairs can be abused to install back doors and “Trojan horses” that send confidential data such as banking passwords back to base; on mobile phones, a “free” game may actually dial premium-rate phone services, racking up hundreds of dollars in charges that go unnoticed (until the next phone bill!).
Nor, in case you’re wondering, does putting your thoughts on paper make them any more secure. Many office printers, copiers, and faxes now incorporate hard disks or other memory to store the printed and scanned images. These hard disks are readily accessible for maintenance purposes, and can be removed and their contents read at leisure to discover what documents have passed around the office in the recent past. Even worse, many people don’t realize that the printouts they make on these printers are encoded with microscopic bits of information, originally intended to thwart counterfeiting, that can also be used for tracking. And when you leave the building, all the smart cards and RFID chips that you carry around and use – corporate entry cards, mass transit cards, passports, credit and debit cards, and so on – can also let people know who and where you are and what you’re up to. For the most part, these privacy intrusions are essentially harmless, if irritating: text messages inviting you to enjoy a discounted latte, popping up every time you come within 100 yards of a coffee shop you’ve visited in the past, are merely annoying. But the thought of criminals obtaining or abusing such information is a different matter. The “safety blanket” provided by these RFID chips is an illusion, since the chips, together with their content, can be cloned, with all the attendant problems of identity theft.
What are the answers?
First, the makers of such devices have a duty to inform the public of any known vulnerabilities associated with their devices. Though it is tacitly understood at some level that a car’s GPS system is in contact with a central station to obtain traffic information, the deeper implications of such communication do not sink in for the majority of users. Some may see admitting vulnerabilities as devaluing a company’s public image, but experience shows that companies that admit their faults promptly often increase their stature in the public eye (Toyota and BP come to mind as recent examples of companies that have failed in this regard). Whether this should be a legal duty is another matter – it is probably impossible for a company to come up with an exhaustive list of every way in which its products could be abused, and it would be patently unfair to penalize a company for a trivial failure of imagination. Industry, too, needs to get behind this and create standardized guidelines on the use of sensor data that contains personal information. There needs to be an industry-wide “best practices” standard governing the implementation of these sensors at the device level, explained to end users in a standardized format so that their use is consistent. For example, the GPS systems in cameras, phones, computers, and cars are all used differently, but all collect personal location data; how that data is used, how it is explained to users, and what controls users have over it should all be standardized.
More importantly, though, companies that design and manufacture such devices must act proactively, incorporating security as an integral part of their products and of the design process, balancing the accelerating demand for new features against the regulatory backlash that may occur if security becomes a populist consumer issue.
There are real-world examples of security already being taken seriously in areas that might seem surprising. Sharp and Xerox (among others) offer and promote encryption on the hard drives built into their copiers and printers; such encryption significantly reduces the value of a stolen or illegally accessed drive. Many laptop manufacturers now offer the option to disable USB ports (standard operating procedure in many corporate Windows desktop builds), and several cell phone manufacturers promote models without cameras. Unfortunately, these solutions fail to address the root cause of the issue and merely patch one of many entry points.
In the early 2000s, Microsoft suffered publicly from a loss of reputation over security issues, prompting Bill Gates to declare a new era of “Trustworthy Computing” for Microsoft in 2002. Some seven years later, the marketing for Windows 7 focused not on new features but on security enhancements – both those embedded in the architecture and their ease of use.
Industry players should collaborate and implement self-regulation to better define the collection and use of data from the different sensors in our lives, and work with governments to enhance the penalties for any interception (deliberate or unwitting) of information containing personal data not intended for the person or organization reading it. This should apply whether or not that information is used for gain. Industry should take immediate action on this, before well-meaning but clumsy, inflexible, and ineffective top-down government regulation is imposed.
In their everyday working practices, computer scientists outside as well as inside the IT industry need to understand the essentials of security and how the data they collect affects the overall system, in order to mitigate the risk of unintentional data leakage and the security problems that follow from it. A change in attitude is also needed regarding the responsible disclosure of exploits by independent researchers: such discoveries should be welcomed and acted upon, rather than seen as challenges to professional competence.
Currently, there is a relative lack of awareness of the risks posed by data breaches: most technology companies are more concerned with business continuity than with security, and a large amount of R&D is devoted in that direction. A consistent approach is needed to alerting users, at the user-interface level, to the privacy issues and associated risks arising from different sensors and applications, along with a unified hardware and basic software (OS) approach to security. This emphasis on continuity rather than security – mistaken, in the eyes of the author – points to what is probably the most important point of all: companies and organizations, including research institutions, need to be aware of the need for privacy and other security protection measures within their domains, as well as the risks that security breaches pose to their businesses, and should seek expertise to guard against those risks.