Data Protection: Term Definition


Introduction

At the beginning of the 21st century, data privacy and security are under threat. Systems can generally be attacked by errant bits in one of three basic ways:

  1. through the corruption of a system’s hardware or software;
  2. through using an insider with access privileges; or
  3. through external hacking, as well as through combinations of these (e.g., through having an insider reveal vulnerabilities that facilitate subsequent hacking).

The closer the attack source is to the system’s core, the more difficult defence becomes, but deep threats focus suspicion on fewer potential attackers. A typical tale of corruption involves a rogue employee of a U.S. microprocessor firm tampering with circuits in every PC chip so that they all go bad simultaneously at just the right time; how such simultaneity could be ensured without premature discovery is never explained. A slightly more plausible threat is the planting of a bug in a specific system, so that an external signal or specified condition makes the system go awry. Last is the hacker route. This paper argues against relying on deterrence policies as the main tool to prevent data loss, and instead advocates the introduction of effective data protection techniques.

Threats and Hacker Attacks

Most systems divide the world into at least three parts: outsiders, users, and superusers. One popular route of attack on Internet-like networks is

  • systematically guessing someone’s password so that the outsider is seen as a user, and then
  • exploiting the known weaknesses of operating systems (e.g., Unix), so that users can access superuser privileges (Carey, 2004).

Once granted superuser privileges, a hacker can read or alter the files of other users or those of the system; can control the system under attack; can make re-entering the system easier (even when tougher security measures are subsequently enforced); and can insert rogue code (e.g., a virus, logic bomb, Trojan horse, etc.) for later exploitation. The damage a hacker can do without acquiring superuser privileges depends on the way systems allocate ordinary privileges. A phone user per se can do little damage to the phone system (Lee, 2002).
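
The allocation of ordinary and superuser privileges described above can be illustrated with a minimal, purely hypothetical sketch in Python: each operation is mapped to a minimum role, and anything not explicitly listed is denied by default. The role names and operations are assumptions for illustration only, not drawn from any particular operating system.

```python
from enum import IntEnum

class Role(IntEnum):
    """Hypothetical privilege levels mirroring the outsider/user/superuser split."""
    OUTSIDER = 0
    USER = 1
    SUPERUSER = 2

# Minimum role required for each (hypothetical) operation.
REQUIRED_ROLE = {
    "read_own_files": Role.USER,
    "read_any_file": Role.SUPERUSER,
    "modify_system_code": Role.SUPERUSER,
}

def authorise(caller_role: Role, operation: str) -> bool:
    """Allow an operation only if the caller's role meets the minimum required."""
    required = REQUIRED_ROLE.get(operation, Role.SUPERUSER)  # default-deny: unknown operations need the highest level
    return caller_role >= required

# An ordinary user may read their own files but not alter system code.
assert authorise(Role.USER, "read_own_files")
assert not authorise(Role.USER, "modify_system_code")
```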

Computer networks are especially vulnerable to abusers when certain privileges are granted without being metered. Any system with enough users will contain at least one who would abuse resources, filch data, or otherwise gum up the works. Although mechanisms to keep nonusers off the system matter, from a security point of view, limiting what authorized users can do may be more important. Another method of attack–applicable only to communications networks open to the public–is to flood the system with irrelevant communications or computerized requests for service (Beyleveld and Townend 2004).

Systems in which either service is free or accountability can be evaded in some other way are prone to such attacks. The weakness of such attacks is that they often require multiple sources (to tie up enough lines) and separate sources (to minimize discovery), and their effects last only as long as calls come in. Because communications channels within the United States are much thicker than those that go overseas, overseas sites are a poor venue from which to launch a flooding attack. Most problems of security for systems come from careless users, poor systems administration, or buggy software. Users often choose easily guessed passwords and leave them exposed (Lee, 2002).
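
One common countermeasure to flooding, implied by the observation that such attacks need many separate sources, is to meter requests per source. The sketch below is a minimal, illustrative rate limiter in Python; the thresholds and source identifiers are assumptions, and a real service would combine this with other defences.

```python
import time
from collections import defaultdict, deque

class SourceRateLimiter:
    """Illustrative per-source limiter: refuse a caller that exceeds
    max_requests within window_seconds, so a single flooding source
    cannot tie up the service on its own."""

    def __init__(self, max_requests=10, window_seconds=60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._history = defaultdict(deque)  # source id -> timestamps of recent requests

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        recent = self._history[source]
        # Drop timestamps that have fallen outside the window.
        while recent and now - recent[0] > self.window_seconds:
            recent.popleft()
        if len(recent) >= self.max_requests:
            return False  # this source is flooding; refuse service
        recent.append(now)
        return True

limiter = SourceRateLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("203.0.113.5", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# [True, True, True, False] -- the fourth request inside one second is refused
```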

Poorly administered systems include those that let users choose easily guessed passwords, keep default passwords or backdoors in operation, fail to install security patches, or give users read or write access to system resources (particularly files that control important processes) from which they should be barred. Common bugs include those that override security controls, permit errant users to crash the system, or in general make security unnecessarily difficult or discretionary (Carey, 2004).
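
As an illustration of the password hygiene problems just described, the following minimal Python sketch rejects passwords that are short, commonly guessed, vendor defaults, or derived from the user's own name. The word lists and rules are illustrative assumptions only, not a complete password policy.

```python
# Hypothetical word lists; a real deployment would use much larger ones.
COMMON_PASSWORDS = {"password", "123456", "letmein", "admin", "qwerty"}
DEFAULT_ACCOUNT_PASSWORDS = {"admin", "changeme", "root"}

def password_is_acceptable(password: str, username: str) -> bool:
    """Reject passwords that are short, commonly guessed, vendor defaults,
    or trivially derived from the user's own account name."""
    lowered = password.lower()
    if len(password) < 10:
        return False
    if lowered in COMMON_PASSWORDS or lowered in DEFAULT_ACCOUNT_PASSWORDS:
        return False
    if username.lower() in lowered:
        return False
    return True

print(password_is_acceptable("letmein", "jsmith"))               # False: commonly guessed
print(password_is_acceptable("jsmith2024extra", "jsmith"))       # False: contains the username
print(password_is_acceptable("plum-kettle-orbit-42", "jsmith"))  # True
```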

The Data

Data can be any information that can be related to an individual, not just text. It could therefore include photographs, audio and video material, records from automatic systems such as CCTV or a swipe-card security system, and medical data. This means that much more information is now within the scope of Data Protection, and it affects far more people and activities. The Principles refer to ‘processing’ personal data. The definition of processing in the Act is very broad, including obtaining the data, holding or storing it, using it in any way, changing it, passing it on to others, erasing it or destroying it (Beyleveld and Townend 2004).

This means that from the moment you acquire the data, right up to the time you dispose of it, you have to treat it properly. Many of the Principles refer to your ‘purpose(s)’ in holding the data. Where Data Protection is concerned, the ‘purposes’ defined by the Information Commissioner are fairly general. Everything to do with personnel, for example, including pay, pensions, holiday and sickness records, work planning and so on, would be within the ‘Staff Administration’ purpose. Much library work would come under ‘Information and databank administration’ (Carey, 2004).

The Data Controller

The Act gives the main responsibility for Data Protection to the ‘Data Controller’. This is the ‘person’ who decides why and how personal data is processed. However, a ‘person’ does not have to be an individual; a company (or any ‘incorporated’ organization) is also a legal person. Although an individual can be a Data Controller in their own right, it is much more likely that the organization you work for will be the Data Controller. This is reassuring for anyone who is carrying out activities on behalf of their employer (Beyleveld and Townend 2004). Even if a specific employee is made responsible for Data Protection compliance, the organization itself remains the Data Controller.

Individual employees are only likely to be personally liable if they ‘knowingly or recklessly’ contravene the employer’s policies and procedures. An organization cannot be a Data Controller on behalf of another organization. This means that each separate legal entity has to consider whether it is a Data Controller, even if they are all part of a group trading to the outside world as a single body. The group may well lay down common standards, but it will not be the Data Controller for data held or used by individual companies. Where organizations work together, they may be Data Controllers jointly over the same data. This can be a complex situation.

What it comes down to is that the Data Protection Compliance Officer, wherever they are located, will have to work closely with people from all over the organization to ensure that policies and procedures are consistent and comprehensive. Even if the library or information services department has excellent Data Protection practice, the organization as a whole will remain vulnerable unless all departments take it equally seriously. The Compliance Officer will also have to ensure that all staff are given the guidance, information and support they need so that when they handle personal data in the course of their work they are enabled to comply with the organization’s policies and procedures (Lee, 2002).

The Data Processor

Many routine activities nowadays are outsourced. Where these involve the processing of personal data, the company that actually carries out the work will usually be a Data Processor. For example, a Data Controller might use external agencies to provide a payroll service, mailing facilities, marketing, delivery of mail-order goods, IT support and many other services. In these cases, the essential distinction is that the Data Processor has no interest in the content of the data they are processing. They don’t decide who will be paid or how much; they just do the sums and get the bank transfers to the right place at the right time. They don’t decide which information will be mailed to which person (although as a marketing company they may give advice on this) (Beyleveld and Townend 2004).

They are not concerned with the content of the database they are recovering from a crashed disk. In this situation, the Data Controller retains all the Data Protection responsibilities. In order to underline this, the Act specifies that there must be a written contract between the Data Controller and the Data Processor, making it clear that the Processor is just following the Data Controller’s instructions. In addition, the Data Processor must be able to show that they have appropriate security and the Data Controller has a specific responsibility to be satisfied that the security is adequate (Carey, 2004).

The Data Subjects

One of the central tenets of the Act, and one that should make a considerable difference once it is universally applied, is that it is essentially unfair to do things behind people’s backs. In most cases, ‘fair’ processing requires that the Data Subject should know who is processing data about them, and for what purpose. This does not always mean that organizations need to go out of their way to slap a Data Protection statement on every form and document. In many cases, it will be entirely obvious who is collecting or using data and what for (Beyleveld and Townend 2004).

If the main users of your information service are internal, then any information you hold about them is likely to be ancillary to that held by the organization as a matter of course on all its employees. Examples of where this might apply could include the names of people within organisations on your contact database. Clearly, they would not be surprised to find themselves there for business purposes, and you are not expected to phone them up and say ‘by the way, welcome to our contact database’. The same provision might also apply to information that is already in the public domain, which you are using for obvious and reasonable purposes (Lee, 2002).

Providing just the information set out above is not necessarily enough, on its own, to guarantee fair processing. You must also give the Data Subject any other information that is needed to make your processing fair (Beyleveld and Townend 2004). There is a prevalent myth that under the new Act you always need the permission of the Data Subject if you want to use their personal data. This is untrue. However, the Act certainly encourages the Data Controller to seek consent, and in certain cases does require it.

The issue of consent must be separated from the provision of information. The requirement to provide information to the Data Subject exists regardless of whether consent is being sought or not. In practice, however, where you are seeking consent it is quite likely that this will be done at the same time as you provide information (Carey, 2004).

Processing that is necessary in connection with a contract involving the Data Subject, and processing that is necessary in order to comply with a legal requirement (the second and third Conditions) do not need the Data Subject’s consent. Most personnel records, for example, could well be processed under one or other of these conditions, as would the customer records of most paid-for services. It is important, however, to pay attention to the requirement that the processing is necessary (Beyleveld and Townend 2004).

The employer may, for example, want to process personnel records in a particular way; but if they could fulfil their part of the contract without that activity then consent may be required after all. The Information Commissioner has suggested that processing without consent but in the Data Subject’s ‘Vital interests’ (the fourth Condition) should be reserved for emergencies only, rather than used routinely, and then only for genuine life and death situations. This guidance is considered by some to be over-restrictive; nonetheless, it is clearly advisable to seek alternatives wherever possible.

For many official functions, consent is not required because of the fifth Condition. This covers processing which is necessary for the administration of justice, for government functions, and for other functions ‘of a public nature exercised in the public interest by any person’. If you are running a service as part of a public authority, or which you believe may fall under this definition, you should consult your legal department to find out whether your activities meet the fifth Condition (Lee, 2002).
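
As a rough illustration (not legal advice) of how the Conditions discussed above interact with consent, the sketch below encodes them as a simple checklist: consent is sought only when no other Condition covers the processing. The condition names are paraphrases of the text, not terms taken from the Act.

```python
# Conditions under which processing may proceed without the Data Subject's
# consent, paraphrased from the discussion above. Illustrative only.
CONDITIONS_NOT_REQUIRING_CONSENT = {
    "necessary_for_contract_with_subject",   # second Condition
    "necessary_to_meet_legal_obligation",    # third Condition
    "necessary_to_protect_vital_interests",  # fourth Condition: genuine emergencies only
    "necessary_for_public_functions",        # fifth Condition
}

def consent_required(applicable_conditions: set) -> bool:
    """Consent is needed only when no other Condition covers the processing."""
    return not (applicable_conditions & CONDITIONS_NOT_REQUIRING_CONSENT)

# A payroll run that is necessary to fulfil the employment contract:
print(consent_required({"necessary_for_contract_with_subject"}))  # False
# A marketing mailing justified by no other Condition:
print(consent_required(set()))  # True
```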

The requirement in the Fifth Data Protection Principle not to hold information longer than necessary might appear daunting. Very little guidance is given in the Act as to how to assess what is ‘necessary’. The starting point is the purpose(s): keeping the information must be necessary for the purpose(s) it is being held for. It is up to the Data Controller to make a judgment on what is necessary. The easiest cases are where there are legal or contractual reasons to keep data.

For example, detailed financial records may have to be kept for six years. So might records of any advice that was given, in case anyone who had been wrongly advised brought a case against you. If the answer to questions such as these is ‘yes’, there may be a case for keeping the data. If not, you should seriously consider whether it is ‘necessary’ to keep it any longer. The information which you will often decide to keep includes material that forms part of the history of the organization, or even, possibly, of the individuals concerned (Ganger and Femling 2003).

A supplementary question is whether you need to keep the personal part of the data. While records of some kind may be necessary, it might be possible to anonymise them or compile the necessary statistics and dispose of the raw data (thus also saving storage space). Before destroying any data, remember that destruction is itself a form of ‘processing’, which must therefore comply with all the Data Protection Principles, including fairness and security (Carey, 2004).
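
The anonymise-then-dispose approach just described can be sketched as follows; the record fields and the per-category count are hypothetical examples of what a purpose might genuinely need to retain.

```python
# Minimal sketch: keep the statistics the purpose needs, drop every personal
# identifier, then dispose of the raw records. Field names are hypothetical.
from collections import Counter

raw_loans = [
    {"borrower": "A. Reader", "staff_id": "1042", "category": "law"},
    {"borrower": "B. Browser", "staff_id": "2381", "category": "law"},
    {"borrower": "C. Visitor", "staff_id": "7764", "category": "science"},
]

def anonymised_summary(records):
    """Compile per-category counts and discard every personal field."""
    return dict(Counter(record["category"] for record in records))

summary = anonymised_summary(raw_loans)   # {'law': 2, 'science': 1}
raw_loans.clear()                          # dispose of the identifiable raw data
print(summary)
```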

Although many computer systems run with insufficient regard for security, they can be made quite secure. The theory is that protection is a point to be sought in a two-dimensional space. One dimension is the degree of access, from totally closed to totally open. A system that is secured only by keeping out every bad guy makes it difficult–or impossible–for good guys to do their work. The second dimension is resources (money, time, attention) spent on sophistication. A sophisticated system keeps bad guys out without great inconvenience to authorized users.

To start with the obvious method, a computer system in a secure location that receives no input whatsoever from the outside world (“air-gapped”) cannot be broken into (and, no, a computer virus cannot be sprayed into the air like a living virus, in the hope that a computer will acquire it). If insiders and the original software are trustworthy (and the NSA has developed multilayer tests for the latter), the system is secure (although often hard to use). Such a closed system is, of course, of limited value, but the benefits for some systems (e.g., nuclear systems) of freer access are outweighed by even the smallest chance of security vulnerabilities (Ganger and Femling 2003).

Deterrence Tools

The challenge for most systems, however, is to allow them to accept external input without putting their important records or core operating programs at risk. One way to prevent compromise is to handle all input as data to be parsed (the process in which the computer decides what to do by analyzing what the message says) rather than as code to be executed directly. Security, then, consists of ensuring that no combination of computer responses to messages can affect the core operating program, indirectly or directly (when parsed, almost all randomly generated data result in error messages).

To pursue a trivial example, there are no button combinations that can be pressed that would insert a virus into an ATM. Less trivially, it is very hard to write a virus in a database manipulation language such as Structured Query Language (SQL) (Ganger and Femling 2003).
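
The same principle, input treated strictly as data rather than as executable code, is what parameterized queries provide in SQL. The following minimal sketch uses Python's built-in sqlite3 module; the table, account names, and hostile-looking input are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 120.0), ('bob', 35.5)")

user_supplied = "alice'; DROP TABLE accounts; --"   # hostile-looking input

# Unsafe pattern (never do this): string concatenation turns input into code.
#   conn.execute("SELECT balance FROM accounts WHERE username = '" + user_supplied + "'")

# Safe pattern: the ? placeholder ensures the input is only ever parsed as a value.
rows = conn.execute(
    "SELECT balance FROM accounts WHERE username = ?", (user_supplied,)
).fetchall()
print(rows)   # [] -- no account matches the hostile string, and no table is dropped
```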

Unfortunately, systems must accept changes to core operating programs all the time. In the absence of sophisticated filters, a tight security curtain may be needed around the few applications and superusers allowed to initiate changes (authorized users might need to work from specific terminals hardwired to the network, an option in Digital’s VAX operating system). Another method to cut down on viruses and logic bombs is to operate solely with programs held on non-erasable, read-only storage media, such as CD-ROMs (Carey, 2004).

The technologies of encryption and, especially, of digital signatures provide other security tools. Encryption is used to keep files from being read and to permit passwords to be sent over insecure channels. Digital signatures permit the establishment of very strong links of authenticity and responsibility between message and messenger. A digital signature is created by hashing the message and signing the hash with a private key for which only one public key exists. If a user’s public key can unlock the hash and the hash matches the message, the message can be considered signed and uncorrupted (Lee, 2002).
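
The sign-and-verify flow described above might look like the following minimal sketch, which uses the third-party Python "cryptography" package and an Ed25519 key pair; the message and key handling are illustrative only.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()   # never leaves the signer
public_key = private_key.public_key()                # can be published freely

message = b"Transfer 100 units to account 42"
signature = private_key.sign(message)                # hash-and-sign under the hood

try:
    public_key.verify(signature, message)            # raises if forged or corrupted
    print("signed and uncorrupted")
except InvalidSignature:
    print("reject: signature does not match message")

# Any change to the message after signing makes verification fail:
try:
    public_key.verify(signature, b"Transfer 900 units to account 42")
except InvalidSignature:
    print("tampering detected")
```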

Computer systems can refuse unsigned messages or ensure that messages really originated from other trusted systems. The private key never has to see the network. The use of digital signatures is being explored for Internet address generation and for secure Web browsers. Users as well as machines, and maybe even individual processes, may in the future all come with digital signatures. Firewalls offer some protection, but, even though they are the most popular method for protecting computers attached to the Internet, they need a good deal of work before they can be used reliably and without considerable attention to detail when being set up (Ganger and Femling 2003).

Client-server architectures suggest a second-best approach to security. Absent constant vigilance by all users, client computers are hard to protect. They are as numerous as their users (and often as variegated); they often travel or sit in unsecured locations, and tend to run common applications over commercial operating systems. Client computers are “owned” by their users who tend to upload their own software, use their own media, and roam their favourite Web sites.

This helps propagate viruses (Ganger and Femling 2003). Traditionally, viruses infected the computers they ran on and little else; but tomorrow’s more intelligent versions may learn to flood or otherwise disable networks, and to seek out specific information on servers in order to pass it along or corrupt it. Servers, for their part, hold the core objects (information bases, processing algorithms, and system control functions) from which clients can be refreshed (Lee, 2002).

Servers are few in number (which facilitates auditing and monitoring), and they rarely travel. They can be secured behind physical walls and semantic firewalls. They are “owned” by their institutions and thus unlikely to host unnecessary applications. They are also more likely to run proprietary or heavyweight operating systems, which are inherently more secure. A strategy that solves the easier problem of protecting servers may provide information assurance; however, network servers must also be protected for assured service, and they tend to run commercial network operating systems, which are inherently more vulnerable (Carey, 2004).

No good alternative exists to having system owners attend to their own protection. By contrast, having the government protect systems requires it to know details of everyone’s operating systems and administrative practices–an alternative impossible to implement, even if it did not violate commonly understood boundaries between private and public affairs. In cyberspace, forcible entry does not exist, unless mandated by misguided policy (Beyleveld and Townend 2004). The ease with which hackers can attack a system from anywhere around the globe without leaving detectable virtual fingerprints suggests that the risk of punishment is low. Hackers supported by foreign governments may be detected but later hidden (perhaps by allied TCOs), or discovered but lie beyond extradition (Carey, 2004).

Deterrence is commonly believed to have worked, even if this is impossible to prove; at any rate, the homeland of the United States was not attacked by a foreign force using either nuclear or conventional weapons (McKilligan and Powell 2004). By analogy, analysts have wondered whether a strategy similar to deterrence could ward off attacks on critical information systems. The argument here is that an explicit strategy of deterrence against attacks on the nation’s information infrastructure is problematic and that little would be gained from making any such policy at all specific. Any state that perpetrates harm to UK data security can already expect retaliation (Beyleveld and Townend 2004).

Three concerns apply to explicit deterrence: the incident must be well defined; the identity of the perpetrator must be clear; and the will and ability to carry out punishment must be believed (and the punishment must be one that cannot be warded off). Two further concerns apply to deterrence in kind: the perpetrator must have something of value at stake (McKilligan and Powell 2004), and the punishment must be controllable. The two factors that argue against retaliation in kind are therefore asymmetry and controllability.

If a nation that sponsored an attack on the infrastructure itself lacked a reliable infrastructure to attack, it could not be substantially harmed in kind and therefore would not be deterred by an equal and opposite threat. North Korea, for example, does not have a stock market to take down; phone service in many Islamic terror-sponsoring states is already hit-or-miss. Controllability–the ability not just to achieve effects but to predict their scope–is difficult to achieve. To predict what an attack on someone’s information system will do requires good intelligence about how to get into it, what to do inside, and what secondary effects might result.

The more complex systems become, the harder predicting secondary effects becomes–not only effects inside the system but also outside it, or even outside the country. Retaliation may produce nothing, may produce nothing that can be made to look like something, may produce something or even everything, and may affect third parties, including neutrals, friends, or UK interests. The national information infrastructure (NII), after all, is growing increasingly globalized (Beyleveld and Townend 2004).

Possible Difficulties

The difficulties involved in the three issues remaining to be discussed here–defining the incident, determining the perpetrator, and delivering retaliation–can be illustrated by eight vignettes. Note that retaliation against physical terrorism is a cleaner concept to apply than retaliation against information attacks, yet it has been less than clearly successful as a policy (Beyleveld and Townend 2004). If an information attack were distinguished from background noise, the perpetrator caught, and an obvious chain of evidence pointed to command by, or at least assistance from, a foreign government, then something actionable would have occurred.

How often can an attack be traced unambiguously? Perpetrators rarely leave anything as identifiable as fingerprints. Criminals often have habits that increase the chance of their being caught–they brag, they return to the scene of the crime, they inflexibly adopt a particular method, they do not clean up their signatures–but these are not hallmarks of professional operators (McKilligan and Powell 2004). Because cold, professional hacking incidents are rare (or at least known ones are), the chance of detecting a carefully laid plan is unknowable. Even were the perpetrators caught, tracing them to a government is hardly guaranteed: hackers neither wear uniforms nor require enormous resources or instruments hard to find outside government hands (Beyleveld and Townend 2004).

Conclusion

A nation can defend its information infrastructure by denial, detection (with prosecution), and deterrence. Denial frustrates attacks by preventing them or limiting their effects. Detection followed by prosecution of the attacker inhibits attacks and takes the attacker out of circulation. Deterrence is the threat that a nation can be punished for sponsoring such an attack. Denial and detection are straightforward. No one argues that computer systems ought to be vulnerable to attack and penetration. Most detected cases of hacker warfare are crimes and therefore merit punishment. Denial and detection may be less than satisfactory responses, however.

Defences, from one perspective, are good, but only up to a point. Although they can deny casual attacks, they fall before full-scale ones backed by the resources that only a nation or a similarly financed transnational criminal organization (TCO) could provide. Commercial software systems, developed for low-threat environments, are poorly protected against rogue code. Commercial networks are penetrated all the time. The military, which needs to operate in contested realms, cannot afford such vulnerability. If the constructs of information warfare are taken from or used to revive earlier practices, its advocates will have only themselves to blame for being regarded as nostalgia buffs, and the metaphor will have failed.

Bibliography

  1. Carey, P. 2004, Data Protection: A Practical Guide to UK and EU Law, 2nd revised edn, Oxford University Press.
  2. Beyleveld, D. and Townend, D. 2004, Implementation of the Data Protection Directive in Relation to Medical Research in Europe (Data Protection and Medical Research in Europe), Ashgate.
  3. Ganger, D. L. and Femling, R. 2008, Mastering System Center Data Protection Manager 2007, Sybex.
  4. Lee, 2002, Data Protection Law: Approaching its Rationale, Logic and Limits (Information Law Series, vol. 10), Springer.
  5. McKilligan, N. and Powell, N. 2004, Data Protection: Essential Facts at Your Fingertips, BSI British Standards Institution.
